Open Thread: February 2010
post by wedrifid · 2010-02-01T06:09:38.982Z · 756 comments
Where are the new monthly threads when I need them? A pox on the +11 EDT zone!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
Comments sorted by top scores.
comment by Alicorn · 2010-02-02T02:15:46.830Z
Since Karma Changes was posted, there have been 20 top level posts. With one exception, all of those posts are presently at positive karma. EDIT: I was using the list on the wiki, which is not up to date. Incorporating the posts between the last one on that list and now, there are a total of 76 posts between Karma Changes and today. This one is the only new data point on negatively rated posts, so that makes 2 negatively rated posts out of 76.
I looked at the 40 posts just prior to Karma Changes, and of the forty, six of them are still negative. It looks like before the change, many times more posts were voted into the red. I have observed that a number of recent posts were in fact downvoted, sometimes a fair amount, but crept back up over time.
Hypothesis: the changes included removing the display minimum of 0 for top-level posts. Now that people can see that something has been voted negative, instead of just being at 0 (which could be the result of indifference), sympathy kicks in and people provide upvotes.
Is this a behavior we want? If not, what can we do about it?
↑ comment by wedrifid · 2010-02-02T09:44:15.698Z
Is this a behavior we want?
No. It is not difficult to create a top level post that is approved of or at least kept at '0'. I want undesirable top level posts to hurt.
If not, what can we do about it?
Replace all '-ve' karma value displays of top level posts with '- points' or '<0 points'. We don't necessarily need to know just how disapproved of a particular post is.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T11:03:28.950Z
I've called before for median-based karma: you set a score you think a post should have and the median is used for display purposes, with "fake votes" reducing the influence of individual votes until there are enough to gain a true picture.
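A minimal sketch of how such a median-based display score might work, assuming the "fake votes" are pinned at zero (the function name, the count of fake votes, and the neutral value are all illustrative assumptions, not an actual Less Wrong feature):

```python
from statistics import median

def display_score(votes, fake_votes=5, fake_value=0):
    # Pad the real votes with placeholder votes at a neutral value;
    # early votes are damped until enough real votes accumulate
    # to give a true picture of the consensus.
    padded = list(votes) + [fake_value] * fake_votes
    return median(padded)
```

With five zeros padded in, a single early vote of 10 still displays as 0; the displayed score only moves once several voters agree.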
↑ comment by MBlume · 2010-02-04T05:13:54.513Z
Arrow's Theorem seems relevant...
↑ comment by SilasBarta · 2010-02-04T05:20:29.528Z
↑ comment by wnoise · 2010-02-04T06:21:16.790Z
That doesn't really avoid the issues in Arrow's Theorem, merely blunts them, assuring us that we shouldn't actually care about IIA. However, the fact that this karma scale is one-dimensional, combined with the assumption that people have a single-peaked preference function, does show that this is one of those cases where Arrow's Theorem doesn't apply. Median is a good choice because it's not terribly gameable.
↑ comment by SilasBarta · 2010-02-04T06:25:37.436Z
Actually, the point of the linked article was that irrelevant alternatives aren't. Rather, they reveal information about relative strengths of preferences IF, as Arrow's Theorem assumes, you are restricted to voting methods involving ordinal ranking of the options.
Therefore, you can avoid the claimed problems by being able to express the magnitude of your preference, not just its ranking against others, which is the idea proposed here.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-04T08:17:52.086Z
"One-dimensional" preferences are a special case, and I think solvable.
↑ comment by Wei Dai (Wei_Dai) · 2010-02-02T10:04:16.431Z
It could be sympathy, or a judgment that the poster shouldn't be excessively discouraged from posting in the future.
Is this a behavior we want?
Sure, why not? We can always change things later if we start getting overrun by bad posts, and people still aren't willing to vote them down into negative territory.
↑ comment by CarlShulman · 2010-02-05T18:42:52.095Z
There is a limited downvote budget for each voter (in some ratio to the voter's karma). Downvoting a post now uses 10 points from that budget rather than 1, so perhaps low-karma downvoters (or downvoters who have exhausted their downvote budgets) are now having less of an impact.
↑ comment by MrHen · 2010-02-08T22:10:22.099Z
I like seeing the negative number on my posts. But I have also noticed that the voting seems to be much more forgiving than it was for the posts of old.
The first wave of readers seems to vote up; the second wave votes down; over time it stabilizes somewhere near where the first wave peaked. This doesn't seem to happen on posts that are really superb.
I think showing the full number of up and down votes would be helpful to authors and also let people know why a post is at the number it is. Seeing +5 -7 is different than seeing -2.
That being said, karma inflation seems to be hitting. I am rarely getting downvoted on comments anymore. I don't think I have improved that much as a commentator. I am not convinced that the effects you are seeing are only happening to Posts.
Is this a behavior we want? If not, what can we do about it?
I think a great way to handle the Post karma is to hide the actual number for a week. Let it show + or - for positive or negative but no numbers. By the time one week has passed most people will have moved on.
Another solution may be to keep actual voting history available and let people see votes by people who said their history is public. As far as I can tell, that preference doesn't do anything yet.
ETA: Another solution would be to set karma rewards to only happen after a certain threshold. Between 0 and +5 you don't get any karma. After that, you get 10 karma per point. Everything under 0 still penalizes you 10 karma per point.
Or the above but only getting rewards after a certain percentage votes up. +5 -1 nets 40 karma, +20 -16 nets nothing, but each has a score of +4.
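The percentage-gated variant could be sketched like so (the 75% threshold and the 10-karma-per-point multiplier are assumptions chosen only to reproduce the example numbers above, not anything implemented):

```python
def author_karma(up, down, min_up_fraction=0.75, per_point=10):
    # Net score pays out only when the up-vote fraction clears the
    # threshold; a negative net score always penalizes the author.
    net = up - down
    if net < 0:
        return net * per_point
    if up > 0 and up / (up + down) >= min_up_fraction:
        return net * per_point
    return 0
```

Under this sketch, +5/-1 nets 40 karma while +20/-16 nets nothing, even though both posts display a score of +4.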
↑ comment by Alicorn · 2010-02-08T22:18:52.464Z
Voting history publication does do something - click on a user's name, and then click "liked" or "disliked", and you can see what top-level posts they have voted up or down. It just doesn't work backwards, and doesn't work for comments.
↑ comment by MrHen · 2010-02-08T22:29:43.469Z
Debunking komponisto is negative but appears to have been removed from the list of recent posts.
↑ comment by MichaelHoward · 2010-02-08T21:40:22.477Z
what can we do about it?
Make posting cost karma? That raises the break-even bar. I'm sure Open Threads etc that tend to sit near zero will magically get voted up to wherever that bar is after the rule change.
If for example the cost was ten times the 25th percentile of post scores, you know you'll lose Karma if your post is in the bottom quarter of less wrong posts.
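As a rough sketch, that posting cost could be computed from recent post scores like this (the percentile computation and the ten-karma-per-point multiplier are assumed details of the proposal, not an existing feature):

```python
def posting_cost(recent_post_scores, per_point=10):
    # Charge ten karma per point of the 25th-percentile post score,
    # so an author loses karma overall whenever their post ends up
    # in the bottom quarter of Less Wrong posts.
    scores = sorted(recent_post_scores)
    q1 = scores[len(scores) // 4]  # crude 25th percentile
    return per_point * q1
```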
↑ comment by MrHen · 2010-02-08T22:17:30.347Z
Make posting cost karma? That raises the break-even bar. I'm sure Open Threads etc that tend to sit near zero will magically get voted up to wherever that bar is after the rule change.
Each upvote is worth 10 karma, each downvote is worth -15? Borderline and controversial posts would get hit hard by this.
Anything this complicated, though, owes it to the author to see both numbers.
↑ comment by billswift · 2010-02-02T09:29:57.233Z
I wouldn't necessarily call it sympathy. Sometimes I will up- (or down-) vote something if I think it is better (or worse) than its current score suggests. The purpose of karma on articles should be to identify those most worth reading to those who haven't yet read them, not to be a popularity contest where everyone who disliked it votes it down forever.
↑ comment by wedrifid · 2010-02-02T09:47:47.994Z
I also tend to vote posts up or down based on what I think the score ought to be. But it seems clear that sympathy plays a part. Liked posts spiral freely off towards infinity but disliked posts don't ever spiral down in a similar way. This gives a distinct bias to the expected payoff of posting borderline posts and so is probably not desirable.
↑ comment by wedrifid · 2010-02-02T10:11:30.656Z
If not, what can we do about it?
Vote the posts up. One month later, reverse your vote. (Obviously your reasons for wanting a particular karma level for a post matter.)
↑ comment by MrHen · 2010-02-08T22:03:37.932Z
This really messes with how I, as an author, rely on karma as feedback for how well my post was received.
I hate all karma games more complicated than, "I liked/disliked/didn't-care-about this post."
↑ comment by wedrifid · 2010-02-09T02:52:07.701Z
Sympathy upvotes are already games more complicated than "liked/disliked".
It's not a desirable solution. It's just the best literal answer an individual can have to the "what can we do about it?" question that does not rely on political advocacy. It's true whether or not people hate it.
comment by Seth_Goldin · 2010-02-01T20:01:14.836Z
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-13T06:08:36.659Z
Currently planned to be divided into three parts, "Map and Territory", "How To Actually Change Your Mind", and "Mysterious Answers to Mysterious Questions" - that should give you an idea of the intended content. No ETA, still struggling to find a writing methodology that gets up to an acceptable writing speed.
comment by Stuart_Armstrong · 2010-02-01T09:44:52.498Z
Eliezer's posts are always very thoughtful, thought-provoking and mind-expanding - and I'm not the only one to think this, judging by the vast amount of karma he's accumulated.
However, reviewing some of the weaker posts (such as high status and stupidity and two aces), and rereading them as if they hadn't been written by Eliezer, I saw them differently - still good, but not really deserving superlative status.
So I was wondering whether Eliezer could write a few of his posts under another name, if this is reasonable, to see if the karma reaped would be very different.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T10:40:19.545Z
This is a reasonable justification for using a sockpuppet, and I'll try to keep it in mind the next time I have something to write that would not be instantaneously identifiable as me.
↑ comment by Alicorn · 2010-02-01T14:41:09.452Z
But you'll have to build up the sockpuppet to 50 points before it can make a top post. Can you write that many comments that aren't identifiable as yours?
↑ comment by byrnema · 2010-02-01T16:18:41.432Z
Perhaps contact someone likely and ask them to paraphrase the post in their words and submit it as their own?
Now we'll be getting all kinds of posts with, "Eliezer did not write this... or maybe he did!" ...
↑ comment by MrHen · 2010-02-01T16:25:38.892Z
That is an interesting way to toy with user expectations. I don't know how well it would be received, but I'd love to see data from such an experiment.
↑ comment by Kevin · 2010-02-02T09:51:18.990Z
I wouldn't, it's not going to be meaningful after one or two tries.
I suppose it could be interesting if it was announced in advance that Eliezer was going to try it and then we could spend the next few months accusing each other of being Eliezer witch-hunt style, except with Bayesian priors. Seriously, I am in favor of doing it that way.
↑ comment by SilasBarta · 2010-02-01T19:10:34.369Z
Thinking of the art threadjack I was party to recently -- I wish the art community would do that! Without the paraphrasing, though.
And then, of course, be sure to get independent evaluations of a work before discussing it with anyone to prevent information cascades.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-02-03T09:11:09.352Z
I think it would be acceptable for him, as a site administrator, to doctor the scores of his own comments behind the scenes to make his sockpuppet pass that threshold.
↑ comment by gregconen · 2010-02-01T17:12:39.897Z
It's easy if you have a few co-conspirators. Find five quotes, post them on the quotes thread, ask 9 people to vote each one up (and vote them up as Eliezer Yudkowsky). It probably wouldn't even take that many, since some would certainly be voted up on their own.
But perhaps it would be better, if possible, to hide (or least offer the option to hide) the author of a top-level post. Anyone who cared enough to closely track karma could tell who posted it, but it would weed out a lot of knee-jerk EY upvotes.
↑ comment by RobinZ · 2010-02-01T13:52:36.927Z
I was about to mention your distinctive writing style. :)
↑ comment by CarlShulman · 2010-02-01T15:45:53.103Z
Yvain writes in a consciously similar style, and gets even more karma than Eliezer per post, I think.
↑ comment by komponisto · 2010-02-01T11:21:41.393Z
It has seemed to me that some of Eliezer's recent post scores have been inflated by around 5-10 points due to his being Eliezer; it would be interesting to test this hypothesis.
↑ comment by Unknowns · 2010-02-01T11:36:20.703Z
I wonder if, if the hypothesis were tested and confirmed, anyone would admit to being one of the 5-10 persons who upvote for that reason?
↑ comment by Stuart_Armstrong · 2010-02-01T11:49:26.756Z
I'm one of the 5-10.
There is a depth to "this is an Eliezer argument, part of a rich and complicated mental world with many different coherent aspects to it" that is lacking in "this is a random post on a random subject". In the first case, you are seeing a facet of larger wisdom; in the second, just an argument to evaluate on its merits.
comment by Wei Dai (Wei_Dai) · 2010-02-05T04:56:33.609Z
I thought of a voting tip that I'd like to share: when you are debating someone, and one of your opponent's comments gets downvoted, don't let it stay at -1. Either vote it up to 0, or down to -2; otherwise your opponent might infer that you are the one who downvoted it. Someone accused me of this some time ago, and I've been afraid of it happening again ever since.
It took a long time for this countermeasure to occur to me, probably because the natural reaction when someone accuses you of unfair downvoting is to refrain from downvoting, while the counterintuitive, but strategically correct response is to downvote more.
↑ comment by Kaj_Sotala · 2010-02-17T09:05:51.090Z
An automatic block against downvoting any comment that's a direct response to one of yours would be good.
↑ comment by bgrah449 · 2010-02-05T18:51:23.800Z
My karma management techniques:
1) If I'm in a thread and someone's comment is rated equally with mine, and therefore potentially displaying atop my comment, I downvote theirs until it'll pass mine despite my downvote, to give my comment more exposure. I remove the downvote later, usually upvoting (their comment is getting voted better than mine because it's good).
2) If I'm debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me. This works best on long debate threads, because a) if my partner's comments are getting immediately upvoted, they tend to be encouraged and will continue the debate, further exposing themselves to downvotes and b) they get fewer reads, so a single vote up or down makes a much bigger impression when almost all the comments in the thread are rarely upvoted/downvoted past +/- 2.
3) Karma is really about rewarding or punishing an author for content, to encourage certain types of content. Comments that are too aggressive will not be upvoted even if people agree with the point, because they don't want to reward aggressive behavior. Likewise, comments that are not aggressive enough are given extra karma - the reader's first instinct is to help promote this message because the timid author won't promote it enough on his own. This is nonsensical in this format, but the instinct is preserved.
I've noticed that the comments that get voted up the most are those that do probability calculations, those whose authors' names pop out of the page, and those which are cynical on the surface, possibly with a wry humor, while revealing a deep earnestness. If you have something unpopular to say, or are just plain losing an argument, that's the best tone to take, because people will avoid downvoting if they disagree, but will usually upvote if they do agree.
EDIT: I agree with Alicorn that votes shouldn't be anonymous, as it would remove the dirtiest of these variably dirty techniques, but in the meantime, play to win.
↑ comment by Unknowns · 2010-02-06T06:32:44.884Z
I can't believe you actually admitted to using these strategies.
↑ comment by Wei Dai (Wei_Dai) · 2010-02-06T07:36:02.677Z
It does make me impressed at his cleverness.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-06T08:47:56.387Z
Not me. At least for points 1 and 2, these strategies have occurred to me, but they're, you know, wrong.
As for point 3, I like that we so strongly discourage aggression. I think that aggression and overconfidence of tone are usually big barriers to rational discussion.
↑ comment by Wei Dai (Wei_Dai) · 2010-02-06T08:55:35.953Z
Not me. At least for points 1 and 2, these strategies have occurred to me
Does that mean you're not impressed at your own cleverness either? :-)
Since I decided to avoid discussing karma, I'll keep my thoughts on the rest of your comment to myself. (But you can probably guess what they are.)
↑ comment by michaelkeenan · 2010-02-15T12:49:53.788Z
I don't like that you are trying to mislead others.
"Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." - The Black Belt Bayesian
The deception you've described is of course minor and maybe you don't lie about important things. But it seems a dangerous strategy, for your own epistemic hygiene, to be casual with the truth. Even if I didn't regard it as ethically questionable, I wouldn't be habitually dishonest for the sake of my own mind.
↑ comment by Zack_M_Davis · 2010-02-05T19:00:36.456Z
in the meantime, play to win
To win what? What is there to win?
↑ comment by CannibalSmith · 2010-02-17T08:52:59.897Z
The same thing you play Tetris or any other game for. Whatever that is.
↑ comment by byrnema · 2010-02-05T18:59:40.444Z
Your last paragraph was astute.
I found this shocking:
If I'm debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me.
I wouldn't game the system like this, not so much because of moral qualms (playing to win seems OK to me) but because I need straightforward karma information as much as possible in order to evaluate my comments. Psychology and temporal dynamics are surely important, but without holding them constant (or at least 'natural'), the system would be way too complex for me to continue modeling and learning from.
↑ comment by bgrah449 · 2010-02-05T19:08:03.701Z
But in a debate, inasmuch as you're relying on the community's consensus to reveal you're right about something, I would prefer to manipulate that input to make it favor me.
↑ comment by byrnema · 2010-02-05T19:56:23.251Z
I thought about it further, and decided that I would have moral qualms about it. First, you are insincerely up-voting someone, and they are using this as peer information about their rationality. Second, you are encouraging a person C to down-vote them (person B) if they think person B's comment should just be at 0. But then when you down-vote B, their karma goes to -2, which person C did not intend to do with his vote.
So I think this policy is just adding noise to the system, which is not consistent with the LW norm of wanting a high signal to noise ratio.
↑ comment by bgrah449 · 2010-02-05T20:22:49.660Z
I am insincerely up-voting someone: True.
They are using this as peer information about their rationality: People are crazy, the world is mad. Besides, who really considers the average karma voter their peer?
Encouraging a person C to down-vote them: Also, person D who only upvotes because they see someone else already upvoted, so they know they won't upvote alone.
↑ comment by Unknowns · 2010-02-06T17:57:17.935Z
It isn't crazy or mad to consider people who vote on your comments as on average equal to you in rationality. Quite the opposite: if each of us assumes that we are more rational than those who vote, this will be like everyone thinking that he is above average in driving ability or whatever.
And in fact, many people do use this information: numerous times someone has said something like, "Since my position is against community consensus I think I will have to modify it," or something along these lines.
↑ comment by orthonormal · 2010-02-07T22:10:50.455Z
And in fact, many people do use this information: numerous times someone has said something like, "Since my position is against community consensus I think I will have to modify it," or something along these lines.
Well, certainly not in those terms, but I've seen things along the lines of "EDIT: Am I missing something?" on comments that get downvoted (from a user who isn't used to being downvoted, generally). Those can have a positive effect.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-06T08:50:04.558Z
Why are you concerned that you win the debate? I'm sure this sounds naive, but surely your concern should be that the truth win the debate?
↑ comment by bgrah449 · 2010-02-06T17:18:07.718Z
If my debate partner is willing to change his mind or stop debating because the community disagrees, I want to know that. I also don't think a) the community's karma votes represent some sort of evidence of an argument's rightness or b) that anyone has a right to such evidence that this tactic denies them.
↑ comment by wedrifid · 2010-02-06T18:09:54.423Z
You could make better arguments for your tactic than the ones you are making.
a) the community's karma votes represent some sort of evidence of an argument's rightness
It does. Noisy, biased evidence, but still evidence. If I am downvoted I will review my position, make sure it is correct, and trace out any likely status-related reasons for the downvoting, which gives an indication of how much truth value I think the votes contain.
↑ comment by loqi · 2010-02-05T19:17:11.105Z
But it's preferable to be wrong.
↑ comment by bgrah449 · 2010-02-05T19:24:34.138Z
For who? Quote from my comment:
Publicly failing in the quantity necessary to maximize your learning growth is very low-status and not many people have the stomach for it.
We have preferences for what we want to experience, and we have preferences for what those preferences are. We prefer to prefer to be wrong, but it's rare we actually prefer it. Readily admitting you're wrong is the right decision morally, but practically all it does is incentivize your debate partners to go ad hominem or ignore you.
↑ comment by loqi · 2010-02-05T19:39:31.129Z
We prefer to prefer to be wrong, but it's rare we actually prefer it.
Well, if I prefer to prefer being wrong, then I plan ahead accordingly, which includes a policy against ridiculous karma games motivated by fleeting emotional reactions.
but practically all it does is incentivize your debate partners to go ad hominem or ignore you
So my options are:
- Attempt to manipulate the community into admitting I'm right, or
- Eat the emotional consequences of being called names and ignored, in exchange for either honest or visibly inappropriate feedback from my debate partners.
I'll go with 2. Sorry about your insecurities.
↑ comment by bgrah449 · 2010-02-05T19:52:08.012Z
Sorry about your insecurities.
Does this count as honest or visibly inappropriate feedback?
I value 1 over 2. Quality of feedback is, as expected, higher in 2, but comes infrequently enough that I estimate 1 wins out over a long period of time by providing less quality at a higher rate.
↑ comment by loqi · 2010-02-05T20:31:16.577Z
My last sentence was a deliberate snark, but it's "honest" in the sense that I'm attempting to communicate something that I couldn't find a simpler way to say (roughly: that I think you're placing too much importance on "feeling right", and that I dismiss that reaction as not being a "legitimate" motivation in this context).
I have no problem making status-tinged statements if I think they're productive - I'll let the community be the judge of their appropriateness. There's definitely a fine line between efficiency and distraction, I have no delusions of omniscience concerning its location. I'm pretty sure that participation in this community has shaved off a lot of pointless attitude from my approach to online discourse. Feedback is good.
I disagree quantitatively with your specific conclusion concerning quality vs quantity, but I don't see any structural flaw in your reasoning.
↑ comment by michaelkeenan · 2010-02-15T12:54:07.622Z
But how can you have any self-respect, knowing that you prefer feeling right to being right? For me, the feeling of being wrong is much less bad than believing I'm so unable to handle being wrong that I'm sabotaging the beliefs of myself and those around me. I would regard myself as pathetic if I made decisions like that.
↑ comment by Douglas_Knight · 2010-02-06T06:05:20.571Z
I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me.
This strategy can be eliminated by showing a count of both upvotes and downvotes, a change which has been requested for a variety of other reasons. I imagine it solves a lot of problems of anonymity, but it makes Wei Dai's dilemma worse. It makes downvoting the -1 preferable to upvoting it.
↑ comment by loqi · 2010-02-05T19:33:53.210Z
Karma is really about rewarding or punishing an author for content, to encourage certain types of content. Comments that are too aggressive will not be upvoted even if people agree with the point, because they don't want to reward aggressive behavior [...] This is nonsensical in this format, but the instinct is preserved.
Karma can be (and by your own admission, is) about more than first-order content. Excessively aggressive comments may not themselves contain objectionable content, but they tend to have a deleterious effect on the conversation, which certainly does affect subsequent content.
↑ comment by bgrah449 · 2010-02-05T19:39:23.677Z
Excessively aggressive comments may not themselves contain objectionable content, but they tend to have a deleterious effect on the conversation, which certainly does affect subsequent content.
(General "you") Only if you see the partner who is the target of aggression as your equal. If you get the impression that target is below your status, or deserves to be, you will reward the comment's aggression with an upvote.
↑ comment by loqi · 2010-02-05T19:46:04.303Z
Are you speaking descriptively, or normatively? Your "karma is really about" statement led me to believe the latter, but this comment seems to lean toward the former. Could you link to some aggressive comments whose upvotes appear to be driven by status rather than the content they're replying to?
↑ comment by bgrah449 · 2010-02-05T19:57:55.980Z
Descriptively. I'll dig some up.
↑ comment by CannibalSmith · 2010-02-17T12:53:46.492Z
Ding! This is a reminder. It's been 12 days since you promised to dig some up.
↑ comment by wedrifid · 2010-02-05T18:58:26.324Z
I don't recall ever debating with you but knowing your strategy could potentially change the course of future debates. The usual 'karma management', and the more general 'Laws of Power' would suggest that keeping this strategy to yourself is probably wise. Of course, there are exceptions to that strategy too...
↑ comment by Jack · 2010-02-07T22:54:04.047Z
What I really want to do is destroy you karma-wise. This behavior deserves to be punished severely. But I'm now worried about a chilling effect on others who do this coming forward.
Also, everyone, see poll below.
↑ comment by pjeby · 2010-02-08T05:28:51.419Z
What I really want to do is destroy you karma-wise. This behavior deserves to be punished severely. But I'm now worried about a chilling effect on others who do this coming forward.
I want to downvote you for this, because punishing people for telling the truth is a bad thing. On the other hand, you are also telling the truth, so... now I'm confused. ;-)
↑ comment by Jack · 2010-02-07T22:56:30.819Z
If you have ever used one of bgrah's techniques, or some other karma manipulation technique that you believe would be widely frowned upon here vote this comment up.
(Since apparently you people think this is a game, you can downvote the comment beneath this so I don't beat you.)
EDIT: I seriously have to say this? If you don't like there being a poll, vote down the above comment or the karma balancer below. Don't just screw up the poll out of spite.
Replies from: byrnema, bgrah449, wedrifid, Alicorn, Jack↑ comment by byrnema · 2010-02-07T23:51:13.202Z · LW(p) · GW(p)
If you have ever used one of bgrah's techniques, or some other karma manipulation technique that you believe would be widely frowned upon here vote this comment up.
I am considering voting up in order to tilt things in favor of making votes de-anonymized. Ironically, as soon as I do so, it's true.
↑ comment by bgrah449 · 2010-02-08T04:13:04.571Z · LW(p) · GW(p)
If it's not a game, why punish me? What's so offensive about me having high karma?
Replies from: Jack, Kevin↑ comment by Jack · 2010-02-08T05:04:08.766Z · LW(p) · GW(p)
There is nothing offensive about you having high karma. It is offensive that you abused a system that a lot of us rely on for evaluating content and encouraging norms that lead to the truth. Truth-seeking is a communal activity, and undermining the system that a community uses to find the truth is something we should punish. It's similar to learning that you had lied in a comment.
I imagine the vast majority of your karma is not ill-gotten, I have no problem with you having it.
Anyway, I haven't voted you down for precedent setting reasons.
↑ comment by Alicorn · 2010-02-07T22:58:35.006Z · LW(p) · GW(p)
I'm not sure this poll is as anonymous as it should be for maximum accuracy. If votes are ever de-anonymized, someone might swing by and look at this.
Replies from: komponisto, Jack↑ comment by komponisto · 2010-02-08T02:42:03.008Z · LW(p) · GW(p)
Solution: never de-anonymize votes retroactively.
↑ comment by Zack_M_Davis · 2010-02-05T07:38:36.453Z · LW(p) · GW(p)
or [vote the comment] down to -2, otherwise your opponent might infer that you are the one who downvoted it. [...] [T]he counterintuitive, but strategically correct response is to downvote more.
(Downvoted. EDIT: Vote cancelled; see below.) "Opponent"? "Strategically correct response"? Are you sure we're playing the same game?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-05T08:38:35.319Z · LW(p) · GW(p)
I don't understand why lately my comments have been so often uncharitably interpreted. In this case, my "game" is:
- not wanting to be falsely accused of unfair downvoting (either publicly or just in other people's minds)
- not wanting to see others being falsely accused of unfair downvoting
- not wanting to see community members become enemies due to this kind of problem
↑ comment by Zack_M_Davis · 2010-02-05T08:58:28.211Z · LW(p) · GW(p)
(Upvoted.) ... okay, maybe my comment was in poor taste. What I was trying to get at is that there's something very---can I say odd?---about downvoting in order to avoid the appearance of having downvoted.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-05T09:11:52.248Z · LW(p) · GW(p)
Well, the way I see it, votes are meant to convey information. When a comment is at -1, we (and the author of that comment) don't know if it was downvoted by the opponent of the author, or by someone independent. When it's at -2, we know at least one independent person downvoted it, so that's much more useful information.
Not to dump this on you, but I'm getting a bit frustrated at how often my comments are interpreted in the worst possible light, instead of given the benefit of doubt. After your criticism, it took me tens of minutes to think of a reply that I could be sure wouldn't gather further negative comments or downvotes.
If anyone has ideas what I could do about this, I'd really appreciate it. Otherwise I'm considering taking a break for a while. (ETA: I've decided to refrain from mentioning karma again, since that seems to be the main trigger, or to only do so with extreme caution.)
Replies from: bgrah449, Zack_M_Davis, ciphergoth, mattnewport↑ comment by Zack_M_Davis · 2010-02-05T17:48:58.206Z · LW(p) · GW(p)
I'm sorry; I was being unfair. Downvote my first comment in this thread, please.
I also had a moment of self-awareness this morning---I just criticized you for voting for social-strategic reasons rather than solely the merit of the comment, but surely I was doing the same sort of thing that time when I upvoted Toby Ord even though I thought his comment was terrible because Toby Ord is a hotshot academic and I don't want him to think poorly of this community! Although speaking of self-awareness, maybe I should also mention that from introspection I can't tell if I would be having this same response if you weren't the eminent Wei Dai ...
Augh! Could it be that we at Less Wrong are smart enough to avoid all the ordinary status games, but not smart enough to avoid the meta recursive anti-status status games? O horror; O terrible humanity!
Replies from: Wei_Dai, wedrifid↑ comment by Wei Dai (Wei_Dai) · 2010-02-06T07:27:53.113Z · LW(p) · GW(p)
In further retrospect, it seems clear that what I called "frustration" contained a large element of being offended, i.e., thinking that I wasn't given an amount of benefit of doubt befitting my status. Hopefully I gained enough control of my emotions to limit the damage this time. As you say, O horror; O terrible humanity!
BTW, the reason I used "strategically correct" was to reference the past game theory discussions. I thought it would be interesting to point out another piece of counterintuitive advice given by game theory.
↑ comment by wedrifid · 2010-02-05T18:04:06.150Z · LW(p) · GW(p)
Augh! Could it be that we at Less Wrong are smart enough to avoid all the ordinary status games
Not even close. It takes a lot of intellectual effort to keep track of what is actually going on in the conversations, even if they are slightly less 'Wrong' here.
I upvoted Toby Ord even though I thought his comment was terrible because Toby Ord is a hotshot academic
Toby is a hotshot academic? Now that fits things together somewhat better.
Replies from: thomblake↑ comment by Paul Crowley (ciphergoth) · 2010-02-05T09:18:09.042Z · LW(p) · GW(p)
FWIW I hope the result is that you don't feel forced away.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-05T09:51:57.986Z · LW(p) · GW(p)
Thanks for the moral support, but I think what I need more is insights and ideas. :) Maybe I'll just stay away from anything meta, or karma related. In retrospect that seems to be what got me into trouble recently.
↑ comment by mattnewport · 2010-02-05T09:14:34.338Z · LW(p) · GW(p)
Your karma is currently 6218. If you are worrying about downvotes at that level I think perhaps you are placing undue weight on karma.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-05T09:22:53.990Z · LW(p) · GW(p)
So do you think I should have just ignored Zack's comment, or fired off the first defense that came to mind (which probably would have gotten me deeper into trouble)? Or something else?
Replies from: mattnewport↑ comment by mattnewport · 2010-02-05T09:32:57.817Z · LW(p) · GW(p)
My general strategy is to say what I think, moderated slightly by the desire to avoid major negative karma (I hold back on the most offensive responses that occur to me). On average I get positive karma. If my karma started to trend downwards I'd consider revising my tone but I don't think it is productive to worry about the occasional downvote. In fact, without the occasional downvote I would worry that I wasn't adding anything to the conversation.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-05T09:42:20.749Z · LW(p) · GW(p)
In this case it wasn't just a downvote, it was a downvote backed up by a reason from someone that I respect. That's pretty hard to ignore...
Replies from: mattnewport↑ comment by mattnewport · 2010-02-05T09:54:41.551Z · LW(p) · GW(p)
I'm not sure I can give you useful advice because I don't seem to ascribe the same meaning to karma.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-05T10:35:26.851Z · LW(p) · GW(p)
Here's my reply, after some reflection. The reason I strive for having no comments with negative scores is so that when people see a comment from me that is confusing, controversial or just seems wrong (of course I try to prevent that if possible, but sometimes it isn't), they'll think "It's not like Wei to write nonsense. Maybe I should think about this again" instead of just dismissing it. That kind of power seems worth the effort to me. (Except that it hasn't been working well recently, hence the frustration.)
↑ comment by Alicorn · 2010-02-05T05:02:07.333Z · LW(p) · GW(p)
I've noticed this too. It is one of several annoying problems that would evaporate if votes weren't anonymous.
Replies from: wedrifid, arbimote, wedrifid↑ comment by arbimote · 2010-02-06T12:09:05.889Z · LW(p) · GW(p)
Perhaps keep anonymous votes too, but make them worth less or only use them to break ties.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-02-06T12:15:44.888Z · LW(p) · GW(p)
It should be enough to let people volunteer to be non-anonymous voters to change the reputational impact (then a big enough inadequacy of the karma proxy will become visible).
↑ comment by wedrifid · 2010-02-05T18:09:50.466Z · LW(p) · GW(p)
Parent voted back up strategically. ;)
↑ comment by Jack · 2010-02-07T22:44:54.831Z · LW(p) · GW(p)
Do you not think there is sometimes reason to downvote a debating opponent?
Replies from: orthonormal↑ comment by orthonormal · 2010-02-07T23:18:59.308Z · LW(p) · GW(p)
I see the heuristic "don't downvote in an argument you're participating in" as a good one for the kind of corrupted hardware we're running on (as in the Ends Don't Justify Means (Among Humans) post). Given that I could gain or lose (perceived) status in an argument, I'm apt to be especially biased about the quality of people's comments in said argument. I value the prospect of providing more fair and accurate karma feedback in general, even if that means going against object-level intuitions in particular cases.
Usually, if I'm arguing with someone, and their reply is really as bad as it looks to me, several others will see that and downvote it anyway. If this happens and it hits -4 or so, then I feel justified in marking my opinion. In all other cases, I prefer to give the benefit of the doubt.
comment by Vladimir_Nesov · 2010-02-05T10:39:07.359Z · LW(p) · GW(p)
LW has become more active lately, and grown old as an experience, so it's likely I won't be skimming "recent comments" (or any comments) systematically anymore (unless I miss the fun and change my mind, which is possible). Reliably, I'll only be checking direct replies to my comments or private messages (red envelope).
A welcome feature to alleviate this problem would be an aggregator for given threads: functionality to add posts, specific comments, and users to a set of subscribed items. Then all comments on the subscribed posts (or all comments within depth k of the top-level comments), and all comments within the threads under subscribed comments, should appear together as "recent comments" do now. Each comment in this stream should have links to unsubscribe from the item that caused it to appear in the stream, or to add an exclusion on the given thread within another subscribed thread. (Maybe being subscribed to everything, including new items, by default is the right mode, but with ease of unsubscribing.)
This may look like a lot, but right now there is no functionality to reduce the reading load, so as more people start actively commenting, fewer people will be able to follow.
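The subscription scheme sketched above could be modeled roughly as follows; the class and field names are invented for illustration and are not Less Wrong's actual data model:

```python
class Subscriptions:
    """Toy model of the proposed subscription stream.

    All names here are illustrative assumptions, not the site's schema.
    """

    def __init__(self):
        self.posts = set()     # subscribed post ids
        self.threads = set()   # subscribed root-comment ids
        self.users = set()     # subscribed usernames
        self.excluded = set()  # excluded sub-thread ids

    def matches(self, comment):
        """comment: dict with 'post_id', 'author', 'ancestors' (comment ids)."""
        # Exclusions override any subscription on an enclosing thread.
        if any(a in self.excluded for a in comment["ancestors"]):
            return False
        return (comment["post_id"] in self.posts
                or comment["author"] in self.users
                or any(a in self.threads for a in comment["ancestors"]))


def subscription_stream(subs, recent_comments):
    """Filter a recent-comments feed down to the subscribed items."""
    return [c for c in recent_comments if subs.matches(c)]
```

A stream like this would replace scanning all of "recent comments" with scanning only the subscribed subset, which is the load reduction the proposal is after.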
Replies from: ciphergoth, Vladimir_Nesov, Wei_Dai↑ comment by Paul Crowley (ciphergoth) · 2010-02-05T10:49:45.776Z · LW(p) · GW(p)
I find myself once again missing Usenet.
Perhaps if LW had an API we could get back to writing specially-designed clients, which could do all the aggregation magic we might hope for?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-02-05T11:01:04.606Z · LW(p) · GW(p)
"Recent comments" page has a feed.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-05T11:23:09.758Z · LW(p) · GW(p)
I was hoping for a rather richer API than that. "Recent comments" doesn't even include scores.
Replies from: matt↑ comment by matt · 2010-02-06T09:02:27.046Z · LW(p) · GW(p)
That's a trivial mod that Trike has time for. Do you want to specify what data you would like in an API, or try to get the code working yourself?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-06T09:39:16.960Z · LW(p) · GW(p)
I really should try to do it myself - for one thing, that means I can develop server and client in parallel.
↑ comment by Vladimir_Nesov · 2010-02-10T21:46:29.289Z · LW(p) · GW(p)
Apparently even specific users have their own RSS feeds, so I've settled on a feed aggregated from the feeds of a few people. It'd be better if the "friend" functionality worked (maybe it even does, but I don't know of it!), so that the same could be done within the site, with voting and parent/context links.
↑ comment by Wei Dai (Wei_Dai) · 2010-02-05T10:50:16.489Z · LW(p) · GW(p)
An easier to implement feature that would also help alleviate this problem is to have the system remember the last comment read, and then have an option to display all new comments since then in a threaded fashion on one big page, so we can skip whole threads of new comments at once. (I have been thinking about this, and started writing a PHP script to scrape Less Wrong and build the threaded view, but gave up due to technical difficulties.)
Also, I think comments on one's posts should activate the red envelope, but don't right now. Should we private message you if we answer one of your posts and want a reply?
Replies from: matt↑ comment by matt · 2010-02-06T09:04:32.193Z · LW(p) · GW(p)
It's worth sending me requests for access like that. Trike is short on time, but very keen on any time we can spend that acts as a strong multiplier on your time. What do you want in an API?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-06T09:53:46.689Z · LW(p) · GW(p)
Basically, I think what's needed is an API to retrieve a list of comments satisfying some query as an XML document. I'm not sure what kind of queries the system supports internally, so I'll just ask for as much generality and flexibility as possible. For example, I'd like to be able to search by a combination of username, post ID, date, points (e.g., all comments above some number of points), and comment ID (e.g., retrieve a list of comments given a list of IDs, or all comments that come after a certain ID).
If that's too hard, or development time is limited, I would settle now for just a way to retrieve all comments that come after a certain comment ID, and doing additional filtering on the client side.
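The fallback described here (retrieve everything after a given ID, then filter locally) might look like the sketch below; the field names (`id`, `author`, `points`) are assumptions about the parsed feed, not the actual Less Wrong schema:

```python
def filter_comments(comments, username=None, min_points=None, after_id=None):
    """Client-side filtering over a list of parsed comment dicts.

    The keys 'author', 'points', and 'id' are illustrative assumptions
    about what a feed parser would produce.
    """
    result = []
    for c in comments:
        if username is not None and c.get("author") != username:
            continue  # wrong author
        if min_points is not None and c.get("points", 0) < min_points:
            continue  # below the score threshold
        if after_id is not None and c.get("id", 0) <= after_id:
            continue  # not newer than the given comment ID
        result.append(c)
    return result
```

With only an "all comments after ID X" endpoint on the server, every other query (by user, by score, by date) reduces to a pass like this over the fetched batch.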
Also, while I have your attention, where can I find some documentation about the Less Wrong codebase? I tried to read it once, and found it quite hard to understand, and was wondering if there's a guide to it somewhere.
Replies from: Douglas_Knight, matt↑ comment by Douglas_Knight · 2010-02-06T17:52:10.646Z · LW(p) · GW(p)
just a way to retrieve all comments that come after a certain comment ID
There is an API for that, but it's broken. This (rss) should get you the 40 comments later than comment number 1000, but it gives 50 regardless of how many you ask for. Also, it rarely gives a link to go to the later comments (only for earlier ones), but if you've been walking these things, you probably knew that.
ETA: I misinterpreted the API. "count" is not supposed to control the number of comments, but to act as a hint to the server about how far back you are. If that hint is missing or wrong, it leaves out the prev/next links, especially prev. You can make prev appear by adding &count=60 (anything over 50), but every time you click prev it will decrease this number by 50 and eventually stop showing prev. You could make the number very large.
Replies from: byrnema↑ comment by byrnema · 2010-02-06T19:38:45.882Z · LW(p) · GW(p)
Would I modify this, or something else, to get the first comment of a particular user?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-02-06T21:55:59.935Z · LW(p) · GW(p)
Would I modify this, or something else, to get the first comment of a particular user?
You can stick ?before=t1_1 onto the end of a user page to get the first comment. yours
Replies from: byrnema↑ comment by byrnema · 2010-02-06T22:13:06.819Z · LW(p) · GW(p)
Awesome! I occasionally want to skim through someone's posts chronologically, or at least read their first few comments, to see how their views might have changed over time, and see to what extent I can tell the state of mind they were in when they arrived here.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-02-06T22:25:54.682Z · LW(p) · GW(p)
Since this interface is broken, it's not so easy to skim. The page is supposed to have a "prev"[1] link at the bottom, but it doesn't.
ETA: better for skimming is to add not just ?before=t1_1 to the user page, but also &count=100000
[1] I hate the use of prev/next, at least because it isn't standard (eg, it's opposite to livejournal). "earlier" and "later" would be clear.
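The two tricks above (`?before=t1_1` plus a large `count`) can be combined into a small helper; the parameters come from the comments above, and the server's actual behavior is what it is, so treat this as a sketch rather than a guarantee:

```python
def earliest_comments_url(username, count=100000):
    """Build a URL for skimming a user's earliest comments.

    Combines the ?before=t1_1 trick with a large count hint, as
    described above; parameter semantics are the server's, not ours.
    """
    return ("http://lesswrong.com/user/%s/?before=t1_1&count=%d"
            % (username, count))
```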
↑ comment by matt · 2010-02-06T20:00:46.677Z · LW(p) · GW(p)
Also, while I have your attention, where can I find some documentation about the Less Wrong codebase?
http://github.com/tricycle/lesswrong - see "Resources" at bottom of page, mostly this (which is a wiki, so if you learn more, please share).
comment by denisbider · 2010-02-05T01:48:29.793Z · LW(p) · GW(p)
While the LW voting system seems to work, and is possibly better than no threshold at all, my experience is that posts containing valuable and challenging content don't get upvoted, while the most upvotes go to posts that state the obvious or express an emotion with which readers identify.
I feel there's some counterproductivity there, as well as an encouragement of groupthink. Most significantly, I have noticed that posts which challenge that which the group takes for granted get downvoted. In order to maintain karma, it may in fact be important not to annoy others with ideas they don't like - to avoid challenging majority wisdom, or to do so very carefully and selectively. Meanwhile, playing on the emotional strings of the readers works like a charm, even though that's one of the most bias-encouraging behaviors, and rather counterproductive.
I find those flaws of some concern for a site like this one. I think the voting system should be altered to make upvoting as well as downvoting more costly. If people have to pick and choose which comments and articles to upvote or downvote, I think they will vote with more reason.
There are various ways to make voting costlier, but an easy one would be to restrict the number of votes anyone has. One solution would be to tie votes to karma: if I've gained 500 karma, I should be able to upvote or downvote F(500) comments, where F would probably be a log function of some sort. This would give more leverage to people who are more active contributors, especially those who write well-received articles (since an article upvote is worth 10x karma), and it would also limit the damage from casual participants who might otherwise be inclined to vote more emotionally.
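A log-shaped F of the kind proposed might look like this; the base and multiplier are arbitrary choices for the sketch, not anything the site implements:

```python
import math


def vote_budget(karma):
    """Hypothetical F(karma): log-scaled allowance of up/downvotes.

    The constants are illustrative only; with these choices, 500 karma
    buys roughly 44 votes, and doubling karma adds a flat 5 votes.
    """
    if karma <= 0:
        return 0
    return int(5 * math.log2(karma + 1))
```

The point of the log shape is diminishing returns: active contributors get more votes, but no one's budget grows linearly with karma.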
Replies from: orthonormal, AndyWood, mattnewport, denisbider↑ comment by orthonormal · 2010-02-05T08:16:13.530Z · LW(p) · GW(p)
If I've gained 500 karma, I should be able to upvote or downvote F(500) comments, where F would probably be a log function of some sort.
Um, that math doesn't work out unless the number of new users expands exponentially fast. You need F(n) to be at least n, and probably significantly greater, in order to avoid a massive bottleneck.
Replies from: Cyan↑ comment by Cyan · 2010-02-05T14:30:31.067Z · LW(p) · GW(p)
I thought of that too, but then I realized the karma:upvote conversion rate on posts is 10:1, which complicates the analysis of the karma economy.
Replies from: denisbider↑ comment by denisbider · 2010-02-11T16:37:19.777Z · LW(p) · GW(p)
If F(n) < n, then yes, karma disappears from the system when voting on comments, but is pumped back in when voting on articles.
It does appear that the choice of a suitable F(n) isn't obvious, and this is probably why F(n) = infinity is currently used.
Still, I think that a more restrictive choice would produce better results, and less frivolous voting.
↑ comment by mattnewport · 2010-02-05T06:45:33.311Z · LW(p) · GW(p)
Are you aware that downvotes are already limited by karma? Limiting upvotes as well might have merit.
There probably needs to be a bias towards upvotes however, otherwise it will be very difficult to get significant positive karma.
↑ comment by denisbider · 2010-02-11T16:57:21.914Z · LW(p) · GW(p)
See what I mean about the voting system being broken?
http://lesswrong.com/lw/1r9/shut_up_and_divide/1lxw
Currently voted -2 and below threshold.
Completely rational points of view that people find offensive cannot be expressed.
This is a site that is supposed to be about countering bias. Countering bias necessarily involves assaulting the emotional preconceptions that cause falsity of thought. Yet performing such assaults is actively discouraged.
Does that make this site Less Wrong, or More Wrong?
Replies from: Morendil, MrHen, mattnewport, byrnema↑ comment by Morendil · 2010-02-11T17:14:10.048Z · LW(p) · GW(p)
You're getting downvoted for overconfidence, not for the content of your point of view.
The utilitarian point of view is that beyond some level of salary, more money has very small marginal utility to an average First World citizen, but would have a huge direct impact in utility on people who are starving in poor countries.
Your point is that the indirect impacts should also be considered, and that perhaps when they are taken into account the net utility increase isn't so clear. The main indirect impact you identify is increasing dependency on the part of the recipients.
Your concern for the autonomy of these starving people is splendid, but the fact remains that without aid their lives will be full of suffering. Your position appears to be "good riddance". You can't fault people for being offended at the implied lack of compassion.
I suspect that your appeal for sympathy towards your position is doubly likely to fall on deaf ears as a result. Losing two karma points isn't the end of the world, and does not constitute suppression. Stop complaining, and invest some effort in presenting your points of view more persuasively.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-11T18:08:19.586Z · LW(p) · GW(p)
You're getting downvoted for overconfidence, not for the content of your point of view.
If denis is just being overconfident, couldn't we just say "you're being overconfident here, probably because you neglected to consider ..." and reserve downvotes for trolls and nonsense (i.e., comments that clearly deserve to be hidden from view)?
Replies from: Morendil, CarlShulman↑ comment by Morendil · 2010-02-11T18:25:14.576Z · LW(p) · GW(p)
Downvotes signal "would like to see fewer comments like this one". This certainly applies to trolls and nonsense, but it feels appropriate to use the same signal for comments which, if the author had taken a little more time to compose, readers wouldn't need to spend time correcting one way or another. The calculation I've seen at least once here (and I tend to agree with) is that you should value your readers' time about 10x more than you value yours.
The appropriate thing to do if you receive downvotes and you're neither a troll nor a crackpot seems to simply ask what's wrong. Complaining only makes things worse. Complaining that the community is exhibiting censorship or groupthink makes things much worse.
↑ comment by CarlShulman · 2010-02-11T18:35:01.645Z · LW(p) · GW(p)
Looking at the comment in question, Denis claims that charity "only" rewards bad things and discourages good ones. That is nonsense on its face, and it's combined with mind-killing politics: ideological libertarianism about the immorality of paying taxes that benefit those labelled dysfunctional. I agree with Robin Hanson on this point.
↑ comment by MrHen · 2010-02-11T19:38:08.525Z · LW(p) · GW(p)
See what I mean about the voting system being broken?
Honestly, the system is doing exactly what it is supposed to be doing. If you think it is broken, I suspect you are expecting it to do something other than its purpose.
When I get frustrated by the karma system it is because I keep wanting more feedback than it provides. But this is a problem with me, not a problem with the system.
Replies from: Rain↑ comment by Rain · 2010-02-11T19:41:43.282Z · LW(p) · GW(p)
But this is a problem with me, not a problem with the system.
Then the solution to this "problem" would be to not care as much about feedback, or to want less feedback than you think you should get? Couldn't it also be addressed by adding mechanisms for providing more feedback? I don't see why the problem has to be on one particular side when solutions could be had either way.
ETA: I agree that denisbider appears to be injecting too much political thinking into his comments and calling it rationality, while not providing adequate support of his positions outside of said politics, and that the karma system is justifiably punishing him for it.
I can also see a potential for the current system to have a 'delicate balance' where changes trying to improve it could be offset by negative outcomes, but I don't think that case has been made.
Replies from: denisbider, thomblake, MrHen↑ comment by denisbider · 2010-02-11T20:01:27.060Z · LW(p) · GW(p)
(1) I don't see my comments as being political. If you perceive them as injecting politics, then I suspect it's because you are used to hearing similar things in a more political environment. My comments are about reason, empathy, charity, value systems, and how they fit together.
(2) I am unable to substantiate my positions if people don't respond. When people do respond, then I have some understanding of the differences between my viewpoint and theirs, and can substantiate. But I don't believe it's reasonable to expect all possible counter-arguments to be preempted in a comment of reasonable length.
(3) The "delicate balance" argument is specious - it is a form of bias in favor of what already exists. If we had a different system, then you would be calling that system a "delicate balance".
Replies from: Rain↑ comment by Rain · 2010-02-11T20:09:06.779Z · LW(p) · GW(p)
1) Some of your earlier comments, especially those most negatively rated, set off all of my "political talking points" alarm bells. I note that many of your later comments aren't so rated, and that you seem to be improving in your message-conveyance.
2) Your replies to replies seem to be going fairly well so far.
3) I agree that it is only potential. Thomblake posted a good link on that very topic, and it is also why I said the case had not been made, and put the phrase in quotes. However, calling it specious and saying I would agree with any system is exactly the sort of thing I was talking about. Just because it's a potential bias doesn't mean that it is necessarily in effect, nor that its effects are so strong that it shows things are obviously broken. We do a lot of probabilistic thinking around here...
Replies from: denisbider↑ comment by denisbider · 2010-02-11T20:29:54.866Z · LW(p) · GW(p)
Since we're doing probabilistic thinking, I would assign a high probability to the current system being imperfect, simply because (1) it is the system the site was designed with prior to developing experience, and (2) the system is observed to have faults.
These faults seem to be fixable by making voting costlier, prompting readers to invest more thought when they decide to vote. I don't even expect that this would necessarily improve my karma, but I think it would increase thoughtfulness, decrease reactivity, and improve quality overall.
There should probably be a daily limit on how many comments people can make, too. I think it would encourage longer and more thoughtful comments rather than shorter and more reactive ones.
Replies from: thomblake↑ comment by thomblake · 2010-02-11T20:37:34.420Z · LW(p) · GW(p)
it is the system with which the site was designed prior to developing experience
Patently false.
There should probably be a daily limit to how many comments people can make, too. I think it would encourage longer and more thoughtful comments rather than shorter and more reactive ones.
I disagree on both points.
↑ comment by thomblake · 2010-02-11T19:54:27.729Z · LW(p) · GW(p)
I can also see a potential for the current system to have a 'delicate balance' where changes trying to improve it could be offset by negative outcomes, but I don't think that case has been made.
Related: Reversal test
ETA: on second thought, more related: status quo bias
↑ comment by mattnewport · 2010-02-11T17:20:07.748Z · LW(p) · GW(p)
I already upvoted you before reading this comment. It can take a little time for votes to settle. Also, you can set your threshold to a different value. The default is less than -2.
Replies from: denisbider↑ comment by byrnema · 2010-02-11T17:54:14.473Z · LW(p) · GW(p)
Incidentally, I also up-voted your comment about how charity is unhelpful because it enables helplessness (even though I disagree), because I definitely think it's valuable to have both arguments represented. However, I did expect your comment would be down-voted, because my impression is that the group here has already considered Ayn Rand and disagrees with her ideologically. I wouldn't say they found your comment offensive ... there are just certain themes that are developed here more than others, and yours struck an anti-theme note.
Do you think having certain 'group themes' is bad for rationality?
Replies from: denisbider↑ comment by denisbider · 2010-02-11T19:07:46.177Z · LW(p) · GW(p)
My observations aren't Randian in origin. At least, I haven't read her books; I even somewhat disapprove of her, from what I know of her idiosyncrasies as a person.
I do think that this is an important topic for this group to consider, because the community is about rationality. My observation is that many commenters seem not to realize the proper role of empathy in our emotional spectrum, and are trying to extend their empathy to their broader environment in ways that don't make sense.
Also, if my anti-empathy comment is being downvoted because it isn't part of a group theme, then the pro-empathy comments should be downvoted as well, but they are not. This indicates that people vote based on what they agree with, whether or not it is in context - and not based on what is in context and/or provides food for thought.
Replies from: byrnema, tut, mattnewport↑ comment by byrnema · 2010-02-11T20:48:14.368Z · LW(p) · GW(p)
Also, if my anti-empathy comment is being downvoted because it isn't part of a group theme, then the pro-empathy comments should be downvoted as well, but they are not.
This indicates you haven't understood me: pro-empathy IS the theme here on Less Wrong. For a variety of reasons, this community tends to have 'humanist goals'. This is considered to not be in conflict with rationality, because rationality is about achieving your goals, not choosing them. If you have a developed rational argument for why less charity would further humanist goals, there may be some interest, but much less interest if your argument seems based on a lack of humanist goals.
Replies from: denisbider↑ comment by denisbider · 2010-02-11T21:11:51.077Z · LW(p) · GW(p)
But the definition of "humanity" isn't even coherent, and is incompatible with the shades of gray that actually exist.
Until these fundamentals are thought out, there can be lots of hot air, but progress toward a goal cannot be made, as long as the goal is incoherent.
It seems to me that the type of humanism you're talking about is based on an assumption that "other people are like me, and should therefore be just as valuable to me as I am".
But other people, especially of different cultures and genetic heritage, have strikingly different values, strikingly different perceptions, different capacities to think, understand and create.
The differences are such that drawing the compassion line at the borders of the human race makes about as much sense as at any other arbitrary point in the biological spectrum.
I believe that, to be consistent in valuing empathy as a goal on its own, you have to have empathy with everything. I find that a laudable position. But the sad fact is, most of us here aren't vegan, nor even want to be. (I would be if most people were.)
People are selfish, and do not have empathy for everything. In fact, most people pretend to have empathy for the world as a whole, whereas in fact they only have empathy for the closest people around them, and perhaps not even them, when push comes to shove.
All that having been said, and the world being as selfish as it is, when you say that you're a humanist, that you want to better the lot of other people, and that you contribute 50% of your income to charity (just as an example), you are basically saying that you're a sucker, and that your empathic circuits are so out of control that you let other people exploit you.
Given that we are the way we are, I think a much more reasonable goal is to foster a world that shares our values, rather than to perpetuate the arbitrary set of people who exist today but don't share them.
↑ comment by tut · 2010-02-11T19:12:43.958Z · LW(p) · GW(p)
People do to some extent vote based on what they agree with, and at least a few make no bones about that. But people also vote based on style: based on whether it feels like you are trying to learn and contribute to our learning, or trying to appear superior and gain status. You look like the latter to me. And I think that you could be arguing the same things, in ways that are no less honest, and get positive karma if you just used different words.
Replies from: denisbider↑ comment by denisbider · 2010-02-11T19:37:10.538Z · LW(p) · GW(p)
I hear Socrates wasn't popular either.
I'm no Socrates, but focusing on style instead of essence is incorrect.
Some of the best lessons I've learned were from people who were using a very blunt style.
I am not trying to appear superior, nor to gain status. If I wanted that, I would not be using a style which I know is likely to antagonize. I use a blunt style at the expense of my status and for the benefit of the message, not the other way around.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-11T19:57:00.902Z · LW(p) · GW(p)
You're saying some things which I've considered attempting to say but have self-censored to some extent due to expecting negative karma. You aren't necessarily saying them in exactly the way I would have tried to put it, and I don't necessarily agree with everything you've been saying but I broadly agree and have been upvoting most of your recent posts.
Replies from: Rain, denisbider↑ comment by Rain · 2010-02-11T20:01:30.973Z · LW(p) · GW(p)
I agree with much of what he seems to be trying to convey. However, in many cases, the style is far too reminiscent of political talking points. Bluntness is useful insofar as it simplifies a message to its essential meaning. Talking points corrupt that process by injecting emotional appeals and loaded terms.
Replies from: denisbider↑ comment by denisbider · 2010-02-11T20:24:23.627Z · LW(p) · GW(p)
Perhaps I would know better than to do that if I were more exposed to US culture, but I am originally from Europe and I tend to abhor political wars for their vacuousness, so perhaps I'm inadvertently using words in ways that are reminiscent of politics.
Replies from: Rain↑ comment by Rain · 2010-02-11T20:36:26.892Z · LW(p) · GW(p)
To remove the word "politics" from my description: You seem very sure of yourself, to the point where it seems you are not taking uncertainty into account where you should be. You state your views about the world as if they were facts, even when discussing things like the utilitarian value of certain actions, where there are competing views on the topic, and you do a disservice to the discussion by failing to mention or explain why your opinions are better than the competing theories, or even to acknowledge that they are opinions.
You don't provide the evidence; you provide a statement of "fact" in isolation, sometimes going so far as to claim special knowledge and ask the audience to do things you know very well are not going to make for an easy or quick discussion (like, "Go spend a few years in Africa.") I found that my alarms deactivated for your response to my comment that we think probabilistically, because the claims were testable and better labeled.
Replies from: denisbider, CarlShulman↑ comment by denisbider · 2010-02-11T21:49:49.110Z · LW(p) · GW(p)
Points taken, thank you.
↑ comment by CarlShulman · 2010-02-11T20:43:40.545Z · LW(p) · GW(p)
I was also moved by these concerns, and find comments sharing these general traits to degrade norms of discussion (e.g. clarity, use of evidence, distinguishing between normative and descriptive claims).
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-12T16:29:47.873Z · LW(p) · GW(p)
Perhaps we need a post setting out these norms clearly, so we can point newcomers to it?
Replies from: Rain, komponisto↑ comment by komponisto · 2010-02-12T16:53:47.071Z · LW(p) · GW(p)
A wiki entry would probably be the appropriate solution.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-12T16:55:49.842Z · LW(p) · GW(p)
As with most things, it should probably be a top-level article first and a wiki entry second...
↑ comment by denisbider · 2010-02-11T20:19:31.906Z · LW(p) · GW(p)
Thanks Matt. I generally try to take this role because I'm aware that the character traits that allow me to do this are somewhat rare, and that the role is valuable in balance.
I'm also aware of the need to improve my skills of getting the message across, but this takes time to develop.
↑ comment by mattnewport · 2010-02-11T19:16:22.195Z · LW(p) · GW(p)
There is some relevant discussion of the issue of how our empathy/instinctive moral reactions conflict with efficient markets in this interview with Hayek. The whole thing is worth watching but the most relevant part of the interview to this discussion starts at 45:25. Unfortunately Vimeo does not support links directly to a timestamp so you have to wait for the video to load before jumping to the relevant point.
ETA a particularly relevant quote:
But we are up against this very strong, and in a sense justified resistance of our instincts and that's our whole problem. A society which is efficient cannot be just. And unfortunately a society which is not efficient cannot maintain the present population of the world. So I think our instincts will have to learn. We shall perhaps for generations still be fighting the problem and fluctuating from one position to the other.
I know exactly why the majority of people do not like the kind of relative status which a free competitive society produces. But every time they try to correct this they start on a course where to apply the same principle universally destroys the whole system.
Now I think that perhaps for the next 200 years we will be fluctuating from the one direction to the other. Trying to satisfy our feeling of justice, and leading away from efficiency, finding out that in trying to cure poverty we really increase poverty, then returning to the other system, a more effective system to abolish poverty, but on a more unjust principle. And how long it will have to last before we learn to discipline our feelings I can't predict.
comment by Cyan · 2010-02-01T15:21:25.232Z · LW(p) · GW(p)
If I understand the Many-Worlds Interpretation of quantum mechanics correctly, it posits that decoherence takes place due to strict unitary time-evolution of a quantum configuration, and thus no extra collapse postulate is necessary. The problem with this view is that it doesn't explain why our observed outcome frequencies line up with the Born probability rule.
Scott Aaronson has shown that if the Born rule doesn't hold, then quantum computing allows superluminal signalling and the rapid solution of PP-complete problems. So we could adopt "no superluminal signalling" or "no rapid solutions of PP-complete problems" as an axiom and this would imply the Born probability rule.
I wanted to ask of those who have more knowledge and have spent longer thinking about MWI: is the above an interesting approach? What justifications could exist for such axioms? (...maybe anthropic arguments?)
ETA: Actually, Aaronson showed that in a class of rules equating probability with the p-norm, only the 2-norm had the properties I listed above. But I think that the approach could be extended to other classes of rules.
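To make the p-norm point concrete, here is a toy numerical sketch (my own illustration, not taken from Aaronson's paper): if probability were given by a p-norm of the amplitudes, then unitary evolution, which preserves only the 2-norm, would fail to conserve total probability for any p other than 2. The example state, angle, and function names are all invented for the demo.

```python
import math

def p_norm(amps, p):
    """p-th root of the sum of |amplitude|^p over all branches."""
    return sum(abs(a) ** p for a in amps) ** (1.0 / p)

def rotate(amps, theta):
    """Apply a 2x2 rotation matrix (a real unitary) to a two-amplitude state."""
    a, b = amps
    return (math.cos(theta) * a - math.sin(theta) * b,
            math.sin(theta) * a + math.cos(theta) * b)

state = (0.6, 0.8)             # 2-norm is exactly 1
rotated = rotate(state, 0.7)   # arbitrary unitary evolution

print(p_norm(rotated, 2))      # ~1.0: the 2-norm is preserved
print(p_norm(state, 3))        # the 3-norm before the rotation...
print(p_norm(rotated, 3))      # ...differs from the 3-norm after it
```

Under a hypothetical 3-norm rule, the "total probability" of the same physical evolution would change with the rotation, which is the kind of non-linearity Aaronson exploits to derive signalling and PP-solving superpowers.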
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T17:55:21.163Z · LW(p) · GW(p)
Non-Born rules give us anthropic superpowers. It is plausibly the case that the laws of reality are such that no anthropic superpowers are ever possible, and that this is a quickie explanation for why the laws of reality give rise to the Born rules. One would still like to know what, exactly, these laws are.
To put it another way, the universe runs on causality, not modus tollens. Causality is rules like "and then, gravity accelerates the bowling ball downward". Saying, "Well, if the bowling ball stayed up, we could have too much fun by hanging off it, and the universe won't let us have that much fun, so modus tollens makes the ball fall downward" isn't very causal.
Replies from: Cyan↑ comment by Cyan · 2010-02-02T00:52:21.200Z · LW(p) · GW(p)
This reminds me of an anecdote I read in a biography of Feynman. As a young physics student, he avoided using the principle of least action to solve problems, preferring to solve the differential equations. The nonlocal nature of the variational optimization required by the principle of least action seemed non-physical to him, whereas the local nature of the differential equations seemed more natural.*
I wonder if there might not be a more local and causal dual representation of the principle of no anthropic superpowers. Pure far-fetched speculation, alas.
* If this seems vaguely familiar to anyone, it's because I'm repeating myself.
comment by JamesAndrix · 2010-02-01T06:43:10.797Z · LW(p) · GW(p)
An ~hour-long talk with Douglas Hofstadter, author of Gödel, Escher, Bach.
Titled: Analogy as the Core of Cognition
comment by Cyan · 2010-02-02T14:27:37.843Z · LW(p) · GW(p)
"Cf." is sometimes misused around here.
Replies from: Zack_M_Davis, RobinZ↑ comment by Zack_M_Davis · 2010-02-03T03:16:56.068Z · LW(p) · GW(p)
Okay, yes, bad habit. I'll stop, I'll stop!
comment by Scott Alexander (Yvain) · 2010-02-01T12:39:10.225Z · LW(p) · GW(p)
Fun sneaky confidence exercise (reasons why the exercise is fun and sneaky to be revealed later):
Please reply to this comment with your probability level that the "highest" human mental functions, such as reasoning and creative thought, operate solely on a substrate of neurons in the physical brain.
Replies from: Jayson_Virissimo, ciphergoth, HalFinney, Morendil, SilasBarta, CronoDAS, Morendil, Zack_M_Davis, Kaj_Sotala, byrnema, DonGeddis, CannibalSmith, RobinZ, ciphergoth, Jonii, arundelo, Morendil, Torben, ciphergoth, SilasBarta, ata, Jack, magfrump, FAWS, AndyWood, JamesAndrix↑ comment by Jayson_Virissimo · 2010-02-01T17:55:50.017Z · LW(p) · GW(p)
<.05
I am no cognitive scientist, but I believe some of my "thinking" takes place outside of my brain (elsewhere in my body) and I am almost certain some of it takes place on my paper and computer.
Replies from: loqi, pjeby↑ comment by loqi · 2010-02-01T20:48:09.836Z · LW(p) · GW(p)
Speaking of "thinking" with neurons other than those found in the brain, kinesthetic learning gives me pause concerning the sufficiency of cranial preservation in cryonics. How much "index-like" information do we store in the rest of our neurons? Does this vary with one's level of kinesthetic dependence? Would waking up disconnected from the rest of our nervous system (or connected to a "generic" substitute) be merely disorienting, or could it constitute a significant loss of personality/memory? Neuroscientists, help!
Replies from: HalFinney, AdeleneDawner↑ comment by HalFinney · 2010-02-01T22:01:00.104Z · LW(p) · GW(p)
When I signed up for cryonics, I opted for whole body preservation, largely because of this concern. But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time. And possibly a SAI could figure out what your body must have been like just from your brain, not sure.
Now recently I have contracted a disease which will kill most of my motor neurons. So the body will be of less value and I may change to just the head.
The way motor neurons work is that there is an upper motor neuron (UMN), which descends from the motor cortex of the brain down into the spinal cord; there it synapses onto a lower motor neuron (LMN), which projects from the spinal cord to the muscle. Just 2 steps. In reality, though, the architecture is more complex: the LMNs receive inputs not only from UMNs but from sensory neurons coming from the body, indirectly through interneurons located within the spinal cord. This forms a sort of loop which is responsible for simple reflexes, but also for stable standing, positioning, etc. Then there are other kinds of neurons that descend from the brain into the spinal cord, including from the limbic system, the center of emotion. For some reason your spinal cord needs to know something about your emotional state in order to do its job, very odd.
Replies from: AdeleneDawner, loqi↑ comment by AdeleneDawner · 2010-02-01T22:11:42.065Z · LW(p) · GW(p)
Then there are other kinds of neurons that descend from the brain into the spinal cord, including from the limbic system, the center of emotion. For some reason your spinal cord needs to know something about your emotional state in order to do its job, very odd.
Fascinating. Citation?
↑ comment by loqi · 2010-02-02T03:00:12.497Z · LW(p) · GW(p)
But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time.
I'm much less worried by this than I am by the prospect that I'd have to do the same for many of my normal thought patterns due to unforeseen inter-dependencies.
And possibly a SAI could figure out what your body must have been like just from your brain, not sure.
Indeed, that's one of the reasons why I prefer thinking about it solely in terms of stored information: a redundant copy only really constitutes a pointer's worth of information. It's even conceivable that a SAI could reconstruct missing neural information in non-obvious ways, like a few stray frames of video. Not worth betting on, though.
Thanks for the informative reply.
↑ comment by AdeleneDawner · 2010-02-01T21:10:50.190Z · LW(p) · GW(p)
This was the first objection that my neuroscientist friend brought up when I tried to talk to him about (edit:) cryonics. I don't think science knows yet how dependent we are on our peripheral nervous system, but he seemed fairly sure that we are to a nontrivial degree.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T22:20:53.872Z · LW(p) · GW(p)
As I say to every objection I hear to cryonics at the moment, your neuroscientist friend should write a blog post or some such about his objections - he has a very low bar to clear to write the best informed critique in the world.
(Guessing you mean cryonics - cryogenics is something else though not unrelated)
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-02-01T22:24:49.214Z · LW(p) · GW(p)
I'll mention it to him.
(And, yes, oops.)
↑ comment by pjeby · 2010-02-01T20:20:52.503Z · LW(p) · GW(p)
I am no cognitive scientist, but I believe some of my "thinking" takes place outside of my brain (elsewhere in my body) and I am almost certain some of it takes place on my paper and computer.
Voted up and seconded. Yvain, If what you actually mean is "operate solely through physical means contained within the human body or physical means manipulated by interaction with the human body," then I'll up it to whatever number is supposed to be used for, "I'm only leaving room for uncertainty because there's no such thing as certainty." ;-)
↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T15:44:02.329Z · LW(p) · GW(p)
I'm at least +70 decibans ("99.99999%") confident that mental states supervene on physical states. Whether your exact description to do with neurons in the brain completely captures all the physical states I'm less confident of.
EDIT: updated from 30 to 70 decibans: I would more easily be convinced that I had won the lottery than that this wasn't so.
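For readers unfamiliar with the unit: a deciban is 10 times the base-10 logarithm of the odds, so +70 decibans corresponds to odds of 10^7 : 1. A minimal sketch of the conversion (the function name is my own):

```python
def decibans_to_probability(db):
    """Convert decibans (10 * log10 of the odds ratio) to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

print(decibans_to_probability(30))  # ~0.999 (odds of 1000 : 1)
print(decibans_to_probability(70))  # ~0.9999999 (odds of 10^7 : 1)
```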
Replies from: MichaelHoward, MichaelHoward↑ comment by MichaelHoward · 2010-02-08T20:04:05.263Z · LW(p) · GW(p)
updated from 30 to 70 decibans: I would more easily be convinced that I had won the lottery than that this wasn't so.
I might be misunderstanding what you mean by 'more easily be convinced', but if the nature of the evidence we'd expect to be doing the convincing is so different in each case, I don't think we can rely on that to tell how much we believe something.
I was much less easily convinced about Many Worlds than I would be that I'd won the lottery, but beforehand I think I'd have put the odds about the same as rolling a six.
↑ comment by MichaelHoward · 2010-02-08T19:55:50.022Z · LW(p) · GW(p)
updated from 30 to 70 decibans: I would more easily be convinced that I had won the lottery than that this wasn't so.
I don't think we can use an ease-of-convincing heuristic to compare deciban levels, if the nature of the evidence we'd expect to get is so different.
I was much less easily convinced about Many Worlds than I would be that I'd won the lottery, but beforehand I think I'd have put the odds about the same as rolling a six.
↑ comment by HalFinney · 2010-02-01T21:32:15.819Z · LW(p) · GW(p)
Like others, I see some ambiguity here. Let me assume that the substrate includes not just the neurons, but the glial and other support cells and structures; and that there needs to be blood or equivalent to supply fuel, energy and other stuff. Then the question is whether this brain as a physical entity can function as the substrate, by itself, for high level mental functions.
I would give this 95%.
That is low for me; a year ago I would probably have said 98 or 99%. But I have been learning more about the nervous system these past few months. The brain's workings seem sufficiently mysterious and counter-intuitive that I wonder if maybe there is something fundamental we are missing. And I don't mean consciousness at all, I just mean the brain's extraordinary speed and robustness.
↑ comment by SilasBarta · 2010-02-01T15:45:51.475Z · LW(p) · GW(p)
operate solely on a substrate of neurons in the physical brain.
As opposed to ...? Ion channels? Quantum phenomena? Multiple interacting brains? Non-neuronal tissue? Neuronal-but-extracranial cells? Soul? Beings outside the observable universe, running the simulator?
What is this belief supposed to be distinguished from?
Replies from: nawitus↑ comment by CronoDAS · 2010-02-01T20:29:17.576Z · LW(p) · GW(p)
To get nitpicky, the brain is made of both neurons and glial cells - and the glial cells also seem to play a role in cognition.
↑ comment by Morendil · 2010-02-01T15:30:36.087Z · LW(p) · GW(p)
I am quite comfortable with the idea that I am my brain, that my brain is made of ordinary living matter (atoms making up molecules making up proteins making up cells), that this matter forms specialized structures responsible for cognition, and I would be hugely surprised if given proof that the highest mental functions cannot be explained adequately in terms of that ontology. The strangest alternative I can think of is Penrose's ENM incomputable-quantum-coherence hypothesis and I'd assign less than 5% probability to his thesis being correct.
↑ comment by Zack_M_Davis · 2010-02-01T18:13:18.127Z · LW(p) · GW(p)
Commenting before reading other replies---I'm going to give the boring, sneaky reply that the question isn't well-specified enough to have an answer; I'd need to know more about what you mean by something to operate solely on a substrate. I mean, clearly there are a lot of cognitive tasks that most people can only do given a pencil and paper, or a computer ... is that the sneaky part, that we store information in the environment, and therefore we're not solely neurons?
↑ comment by Kaj_Sotala · 2010-02-01T13:35:17.004Z · LW(p) · GW(p)
How does "operate solely on" regard distributed cognition arguments, like "creative thought is created via interaction with the remaining human culture" and "we constantly offload cognitive processes (such as memory) to external substrates (like computers and books)"?
Also, the "highest" human mental functions operate via a number of lower-level processes. Does "solely on human neurons" include e.g. possible quantum phenomena on a low level?
Replies from: Morendil↑ comment by byrnema · 2010-02-01T12:57:58.950Z · LW(p) · GW(p)
Could you clarify what you mean by operate on? Or is that part of the point?
Using the definition of 'operate on' that I think is most natural, I'd say there is a .05% chance that these functions only operate on (affect) the physical brain. Unless you mean directly, and then I would assign an 80% chance.
Using the definition of 'operating on' meaning 'requiring', I'd say that there is a 90% chance (probability) that only the brain is required for 90% (fraction) of its functioning. The probabilities I assign would fall down dramatically as you try to raise the 2nd 90% (the fraction). So that I would probably only assign a 1% chance that 100% of higher functions require only the brain.
Replies from: byrnema↑ comment by byrnema · 2010-02-01T16:03:17.011Z · LW(p) · GW(p)
Given the variety of answers here -- further outside what I had considered -- I should qualify that whenever I was thinking of 'beyond the brain' I still meant within the body; like my spinal cord, heart and endocrine system being involved.
↑ comment by DonGeddis · 2010-02-01T23:52:33.205Z · LW(p) · GW(p)
With a straightforward interpretation of your question, I'd answer "95%".
But since you made special mention of being "sneaky", I'll assume you've attempted to trick me into misunderstanding the question, and so I'll lower my probability estimate to 75%, with the missing twenty points accounting for you tricking me by your phrasing of the question.
↑ comment by CannibalSmith · 2010-02-01T22:40:21.987Z · LW(p) · GW(p)
One minus epsilon.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-02-01T22:47:28.763Z · LW(p) · GW(p)
Do you mean that for every epsilon greater than 0, your assigned probability is at least one minus epsilon? If so, you might as well just say one, which isn't a probability.
Replies from: RobinZ↑ comment by RobinZ · 2010-02-01T20:39:43.128Z · LW(p) · GW(p)
"Highest" confidence is 100%, when the brain does not implement any consideration of failure.
Next highest increment is over 99%, I suspect. Call that my guess.
Edit: I'm an idiot - ignore this response. I thought it was asking "what is the highest confidence level that the brain implements in considering the probability of a proposition", which is different and interesting.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T22:21:38.919Z · LW(p) · GW(p)
I don't think the brain really implements proper probabilistic confidence levels.
Replies from: RobinZ↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T15:26:27.605Z · LW(p) · GW(p)
So not the spinal cord, for example?
↑ comment by arundelo · 2010-02-01T12:51:42.220Z · LW(p) · GW(p)
90%
Replies from: arundelo↑ comment by arundelo · 2010-02-01T23:38:30.989Z · LW(p) · GW(p)
I'm writing this comment after coming up with my probability level but before reading anyone else's responses.
Until Yvain's question, I had not put a number on this. I suspect if there were a machine that could measure how confident I "really" am, it would show a higher number.
I spent less than a minute translating from my previous estimate of "highly confident but not certain" to a percentage. Things I considered that made the probability higher:
- Every time humans have figured out how something works, the explanation has been a reductionist one.
- The only reason to think that the mind would be an exception to this is that the mind is unique in other ways (qualia/subjective experience; free will).
Things I considered that made the probability lower:
- The proposition under question could be false for two different reasons:
  - ontologically basic mental entities
  - physical yet non-neurological parts of the mind's substrate
- I don't know how the mind works (nor does anyone else), so I should nudge my probability estimate away from certainty either way.
- The mind is indeed unique.
↑ comment by Morendil · 2010-04-21T18:13:10.281Z · LW(p) · GW(p)
Yvain, are you going to follow up on this now that you seem to have somewhat more time for participation here? ;)
Replies from: Yvain, SilasBarta↑ comment by Scott Alexander (Yvain) · 2010-04-28T13:51:05.503Z · LW(p) · GW(p)
Short answer: I had just read an article on a book called "The Root of Thought" which made it sound like it was making a very convincing case for a lot of higher thought being based in glial cells and not neurons.
It would have been fun and educational to get everyone to say they were 99.999% confident that thought was neural (which I would have done before reading the summary) and then spring the whole glial cell thing on them.
But I ended up not having time to read or even acquire the book, and no one really took the bait anyway. But yeah, "Root of Thought". If any of you have read it, please tell me what you think.
Replies from: Morendil↑ comment by Morendil · 2010-04-28T16:14:44.069Z · LW(p) · GW(p)
Thanks! I was starting to expect something like that, though in fact I've only recently become aware that the scientific consensus is shifting away from seeing glial cells as more or less just stuffing.
My mom is a neuroscientist and she mentioned that some time ago, I was planning to question her a little bit more about that topic. (Interestingly given her profession, she is vehemently skeptical that AI is at all possible, but that's a story for another day.)
↑ comment by SilasBarta · 2010-04-21T18:16:05.927Z · LW(p) · GW(p)
I'd be happy with just an answer to the clarification I've been asking for...
↑ comment by Paul Crowley (ciphergoth) · 2010-02-06T11:32:08.582Z · LW(p) · GW(p)
Time for the reveal on this one I think!
↑ comment by SilasBarta · 2010-02-03T23:47:06.518Z · LW(p) · GW(p)
Still waiting for you to clarify what this belief is supposed to be distinguished from...
↑ comment by ata · 2010-02-02T00:02:37.170Z · LW(p) · GW(p)
I'm going to say 98%, and not account for fun/sneakiness, because I don't know whether you're expecting people to underestimate or overestimate it. And because if the trick is in the wording of the proposition, I don't care enough to try to figure it out.
↑ comment by magfrump · 2010-02-01T18:41:48.988Z · LW(p) · GW(p)
Given a specific set of inputs/outputs (i.e. the virtual reality of existing as a human in a world with consistent computers, pencils and paper, teachers, students, etc.), and assuming that I intuitively understand what a "substrate of neurons" is, extremely certain (see ciphergoth).
Without a set of inputs or outputs, the question is a tree falling in a forest. A Turing machine doesn't perform computation if it doesn't have inputs.
↑ comment by FAWS · 2010-02-01T18:21:22.271Z · LW(p) · GW(p)
Discounting the indication of sneakiness, which looks like it would change the probability if properly taken into account: 65% I wouldn't out of hand dismiss the possibility that parts of the physical body other than neurons in the brain are involved. For instance I wouldn't be terribly surprised if sub-vocalization of verbal thoughts played an important (albeit probably not irreplaceable) role. Confidence that no such thing as a metaphysical soul is involved: 99.9%
↑ comment by AndyWood · 2010-02-01T18:07:07.926Z · LW(p) · GW(p)
I'm going to take your question in the simple sense that first occurs to me, which is something like "dualism is false, and mysterious quantum effects are unnecessary. ordinary molecular chemistry only."
In that case, my probability approaches 1 that ordinary molecular chemistry is 100% sufficient to describe a system that implements reasoning and creative thought, and - heck - experiences consciousness. However, I also think that explaining how will require abstractions that are probably not yet well-understood.
↑ comment by JamesAndrix · 2010-02-01T15:50:55.885Z · LW(p) · GW(p)
Depending on the terms, something near 100% or 0%.
I offload some of my mental functions to the internet. Does that count?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T08:30:11.953Z · LW(p) · GW(p)
A query to Unknown, with whom I have this bet going:
Unknown: When someone designs a superintelligent AI (it won't be Eliezer), without paying any attention to Friendliness (the first person who does it won't), and the world doesn't end (it won't), it will be interesting to hear Eliezer's excuses.
EY: Unknown, do you expect money to be worth anything to you in that situation? If so, I'll be happy to accept a $10 payment now in exchange for a $1000 inflation-adjusted payment in that scenario you describe.
I recently found within myself a tiny shred of anticipation-worry about actually surviving to pay off the bet. Suppose that the rampant superintelligence proceeds to take over its future light cone but, in the process of disassembling existing humans, stores their mind-state. Some billions of years later, the superintelligence runs across an alien civilization which succeeded on their version of the Friendly AI problem and is at least somewhat "friendly" in the ordinary sense, concerned about other sentient lives; and the superintelligence ransoms us to them in exchange for some amount of negentropy which outweighs our storage costs. The humans alive at the time are restored and live on, possibly having been rescued by the alien values of the Super Happy People or some such, but at least surviving.
In this event, who wins the bet?
Replies from: CannibalSmith, ciphergoth, pjeby, Unknowns↑ comment by CannibalSmith · 2010-02-01T09:53:51.273Z · LW(p) · GW(p)
SIAI: Utopia or hundred times your money back!
Eliezer, would you accept a bet $100 against $10000?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T10:35:48.838Z · LW(p) · GW(p)
On the same problem? I might attach some extra terms and conditions this time around, like "offer void (stakes will be returned) if the AI has the power and desire to use us for paperclips but our lives are ransomed by some other entity with the power to offer the AI more paperclips than it could produce by consuming us", "offer void if the explanation of the Fermi Paradox is a previously existing superintelligence which shuts down any new superintelligences produced", and "offer void if the AI consumes our physical bodies but we continue via the sort of weird anthropic scenario introduced in The Finale of the Ultimate Meta Mega Crossover." With those provisos, my probability drops off the bottom of the chart. I'm still not sure about the bet, though, because I want to keep my total of outstanding bets to something I can honor if they all simultaneously go wrong (no matter how surprising that would be to me), and this would use up $10,000 of that, even if it's on a sure thing - I might be able to get a better price on some other sure thing.
Replies from: Unknowns↑ comment by Unknowns · 2010-02-01T12:06:19.637Z · LW(p) · GW(p)
If we survive by an anthropic situation (it's hard to see how that could preserve several persons together, but just in case), then you win the bet, since that would be more like a second world than a continuation of this one.
If the AI is shut down before it has had a chance to operate, the bet wouldn't have been settled yet, so you wouldn't have to pay anything.
Anyway, I'm still going to win.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T08:42:23.162Z · LW(p) · GW(p)
You definitely win. If I say "you'll get killed doing that" and you are, I shan't expect to pay back my winnings when you're reanimated.
↑ comment by pjeby · 2010-02-01T20:17:35.908Z · LW(p) · GW(p)
Perhaps you've already defined "superintelligent" as meaning "self-directed, motivated, and recursively self-improving" rather than merely "able to provide answers to general questions faster and better than human beings"... but if you haven't, it seems to me that the latter definition of "superintelligent" would have a much higher probability of you losing the bet. (For example, a Hansonian "em" running on faster hardware and perhaps a few software upgrades might fit the latter definition.)
↑ comment by Unknowns · 2010-02-01T08:38:21.749Z · LW(p) · GW(p)
I think I would win the bet. It wouldn't exactly be "the end of the world", but just a very strange future of the world.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T08:41:42.748Z · LW(p) · GW(p)
Really? Huh. To me that seems both pretty world-endy and strongly against the spirit of what was implied by your original statement... would you predict this outcome? Is it something that your model allows to happen? I know it's not something I would feel compelled to make excuses for - more like "I TOLD YOU SO!"
What exactly do you think happens in the scenario described?
Replies from: Unknowns↑ comment by Unknowns · 2010-02-01T10:04:01.157Z · LW(p) · GW(p)
Ok, if you're sufficiently worried about the possibility of that outcome, I'll be happy to grant it to your side of the bet... even though at the time, it seemed to me clear that your assertion that the world would end meant that we wouldn't continue as conscious beings.
I definitely wouldn't predict that outcome. I would be very surprised, since I think the world will continue in the usual way. But is it really that likely even on your model?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T10:32:25.494Z · LW(p) · GW(p)
But is it really that likely even on your model?
It's part of a larger class of scenarios where "AI has the power and desire to kill us with a fingersnap, but our lives are ransomed by someone else with the ability to make paperclips".
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-13T06:08:47.871Z · LW(p) · GW(p)
This is actually a damned good question:
http://www.scientificblogging.com/mark_changizi/why_doesn%E2%80%99t_size_matter%E2%80%A6_brain
comment by Paul Crowley (ciphergoth) · 2010-02-01T08:32:35.177Z · LW(p) · GW(p)
To re-iterate a request from Normal Cryonics: I'm looking for links to the best writing out there against cryonics, especially anything that addresses the plausibility of reanimation, the more detailed the better.
I'm not looking for new arguments in comments, just links to what's already "out there". If you think you have a good argument against cryonics that hasn't already been well presented, please put it online somewhere and link to it here.
Replies from: AlanCrowe↑ comment by AlanCrowe · 2010-02-02T19:03:10.404Z · LW(p) · GW(p)
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T22:15:08.797Z · LW(p) · GW(p)
Thanks.
comment by rolf_nelson · 2010-02-01T08:02:28.766Z · LW(p) · GW(p)
I've created a rebuttal to komponisto's misleading Amanda Knox post, but don't have enough karma to create my own top-level post. For now, I've just put it here:
http://docs.google.com/View?id=dgb3jmh2_5hj95vzgk
Replies from: komponisto, wedrifid, Vive-ut-Vivas↑ comment by komponisto · 2010-02-01T11:56:48.317Z · LW(p) · GW(p)
If you actually want to debate this, we could do so in the comments section of my post, or alternatively over in the Richard Dawkins forum.
(Though since you say "my intent is merely to debunk komponisto's post rather than establish Amanda's guilt", I'm suspicious. See Against Devil's Advocacy.)
Make sure you've read my comments here in addition to my post itself.
There is one thing I agree with you about, and that is that this statement of mine
these two things constituting so far as I know the entirety of the physical "evidence" against the couple
is misleading. The misleading part is the phrase "so far as I know", which has been interpreted by people who evidently did not read my preceding survey post to mean that I had not heard about all the other alleged physical evidence. I didn't consider this interpretation because I was assuming that my readers had read both True Justice and Friends of Amanda, knew from my previous post that I had obviously read them both myself, and would understand my statement for what it was -- a dismissal of the rest of the so-called "evidence". However, in retrospect, I should have foreseen this misunderstanding, so I've now edited the sentence to read:
these two things constituting pretty much the entirety of the physical "evidence" against the couple.
ETA: At least one person has upvoted the parent without also upvoting this comment, which I interpret as an endorsement of Rolf Nelson's essay. I find this baffling. Almost every one of Nelson's points (autopsy report, luminol prints, staged break-in, alleged cleanup...) was extensively discussed in comments at the time. The only one that wasn't (a supposed handprint of Knox's on a pillow in Kercher's room) is an outright falsehood -- as you will see from following Nelson's link, it's not even (close to) what that article claims. Furthermore, Nelson criticizes me for "accept[ing] propaganda from the Friends of Amanda (FoA) at face value" while citing True Justice for an "Introduction to Logic 101".
I challenge anyone who thinks that this represents a serious challenge to my post to come out and identify themselves.
Replies from: Jack, rolf_nelson↑ comment by Jack · 2010-02-01T23:25:12.620Z · LW(p) · GW(p)
It is pretty clear to me that Devil's Advocacy is valuable for precisely the reasons in the link Eliezer added at the end of the post (Brandon). I'm not sure we should, therefore, be automatically linking to the piece in response to instances of Devil's advocacy until and unless someone writes a complementary post rebutting Brandon's.
Replies from: wedrifid↑ comment by wedrifid · 2010-02-02T06:03:49.125Z · LW(p) · GW(p)
It is pretty clear to me that Devil's Advocacy is valuable for precisely the reasons in the link Eliezer added at the end of the post (Brandon). I'm not sure we should, therefore, be automatically linking to the piece in response to instances of Devil's advocacy until and unless someone writes a complementary post rebutting Brandon's.
The only reason I haven't posted my draft "Against Devil's Advocacy" is that someone beat me to the punch and I didn't want to make a redundant post. I endorse links to 'Against Devil's Advocacy' precisely because it is an important subset of 'Advocacy' with all the related problems.
↑ comment by rolf_nelson · 2010-02-01T22:00:23.682Z · LW(p) · GW(p)
(a supposed handprint of Knox's on a pillow in Kercher's room) is an outright falsehood -- as you will see from following Nelson's link, it's not even (close to) what that article claims.
Did you misread the source?
I said:
"One of Amanda's bloody footprints was found inside the murder room, on a pillow hidden under Meredith's body."
The source I cited (http://abcnews.go.com/TheLaw/International/story?id=7538538&page=2) said:
"Guede's bloody shoeprint was also positively identified on a pillow found under the victim's body... Police also found the trace of a smaller shoe print on the pillow compatible with shoe sizes 6–8. The print did not, however, match any of the shoes belonging to Knox or Kercher that were found in the house. Knox wears a size 7, Rinaldi said."
Anyway, a debate sounds like a fun use of free time; I replied to the comment you indicated: http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/1gdo
↑ comment by wedrifid · 2010-02-02T06:07:33.913Z · LW(p) · GW(p)
I've created a rebuttal to komponisto's misleading Amanda Knox post, but don't have enough karma to create my own top-level post. For now, I've just put it here:
I don't understand how this was worked around. It looks like (rolf's karma + karma lost by this being posted at the top level) would still have been insufficient.
The karma limit was serving the purpose for which it was intended. If, for some reason, an exception was granted I would like to see this announced.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-02T11:18:13.968Z · LW(p) · GW(p)
Rolf is a major SIAI donor/supporter, so draw your own conclusion there.
Here's a bunch of mine, for fun:
- money > karma
- control the physical layer
- beware the other kind of status
- money is the unit of caring; karma is just a number
Seriously, I've had some interesting discussions with Rolf in the past elsewhere. I'm not sure why he doesn't participate much here, and why he chose this topic to put his efforts into. But maybe we can cut him some slack?
Replies from: wedrifid, ciphergoth, ciphergoth↑ comment by wedrifid · 2010-02-02T12:31:05.373Z · LW(p) · GW(p)
But maybe we can cut him some slack?
Rolf isn't the one we'd be cutting slack to here. It is the moderator's decision to circumvent the karma system to post a political rant that warrants scrutiny.
Eliezer has been quite adamant that this is not the blog of the SIAI. In that context and elsewhere the moderation process has been held to high standards of consistency and transparency. At least acknowledging that special allowances were made (and who made them) would be nice.
I expect the moderator has already learned their lesson. Posting Rolf's rant seems to have allowed him to embarrass himself and can only be expected to have the opposite effect to the one intended. The ~50 karma limit gives people a chance to read posts like this and better calibrate his posting to the social environment before he puts his foot in his mouth.
PS: Can anyone remember what the post was called in which Eliezer describes a scenario about deducing the bias of a coin? A motivated speaker gives only a subset of a stream of coin tosses... I couldn't remember the title.
Replies from: pengvado↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T11:29:30.243Z · LW(p) · GW(p)
I had thought he was here solely to discuss this one thing. If he's interested in the things we're interested in in general as evinced powerfully by those donations then yes, I'll increase the slack I cut. Thanks.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T12:45:38.374Z · LW(p) · GW(p)
BTW I think it's pretty unlikely that anyone at SIAI has used admin privs to allow Rolf to make a top-level post he wouldn't otherwise have been able to make.
↑ comment by Vive-ut-Vivas · 2010-02-01T16:53:26.675Z · LW(p) · GW(p)
Criticizing komponisto for citing "Friends of Amanda Knox" while you yourself cite "True Justice" causes those criticisms to fall flat.
Unfortunately, I find that your essay is wading into Dark Arts territory, since its intent is to show that komponisto's original essay was "misleading", and that that would somehow give veracity to arguments of Amanda Knox's guilt. Using that same logic, one would have to consider the implications of the chief prosecutor in Amanda Knox's case being convicted of abuse of office in another murder trial.
However, I would be interested in seeing komponisto and rolf nelson discuss the actual details of the case; in particular, the points that rolf nelson brought up in the essay.
Replies from: rolf_nelson, rolf_nelson↑ comment by rolf_nelson · 2010-02-01T19:50:32.973Z · LW(p) · GW(p)
Re: dark arts territory, I agree completely. This criticism should be directed more strongly to komponisto. My intent here is merely to repair some of the Bayesian damage caused by komponisto's original post. Perhaps this will dissuade people from wandering into dark arts territory in the future, or at least from wandering in with misleading claims.
Replies from: Vive-ut-Vivas↑ comment by Vive-ut-Vivas · 2010-02-01T21:52:31.108Z · LW(p) · GW(p)
My intent here is merely to repair some of the Bayesian damage caused by komponisto's original post.
I hardly think komponisto inflicted "Bayesian damage" on the members of Less Wrong, seeing as they had already overwhelmingly come to the conclusion that Amanda Knox was not guilty before he had even presented his own arguments.
↑ comment by rolf_nelson · 2010-02-01T19:47:48.548Z · LW(p) · GW(p)
I said once in the doc that 'truejustice claims that X'. Because I said 'truejustice claims that X' rather than just stating X as though it were uncontested fact, and because X is basically correct, I claim that my doc is not misleading. If X is untrue, that would be a different story. In other words, if komponisto cited FoA and FoA's claims were true, I would not accuse him of being misleading.
comment by whpearson · 2010-02-14T19:11:14.383Z · LW(p) · GW(p)
We are status oriented creatures especially with regard to social activities. Science is one of those social activities, so it is to be expected that science is infected with status seeking. However it is also one of the more efficient ways we have of getting truths, so it must be doing some things correctly. I think that it may have some ideas that surround it that reduce the problems of it being a social enterprise.
One of the problems is the social stigma of being wrong, which most people on the edge of knowledge probably are. Being wrong does not signal your attractive qualities; people don't like other people who tell them lies or give them false information. I suspect that falsifiability is popular among scientists because it allows them to pre-commit to changing their minds without taking too high a status hit. This is a bit stronger than leaving a line of retreat, since it says when you'll retreat as well as allowing you to, and it is a public admission. They can say that they currently believe idea X, but that if experiment Y shows Z they will abandon X. That statement is also useful for other people, as it allows them to see the boundaries of the idea.
This can also be seen as working to oppose confirmation bias. If you think you are right, there is no reason to look for data that tests your assumptions. If you want to pre-commit to changing your mind, you need to think how your idea might be wrong and are allowed to look for data.
I would like to see this community adopting this approach.
In the spirit of this: I would cease advocating this approach if it were shown that people who pre-committed to changing their minds suffered as large a status hit as those who didn't, when it was shown that they were wrong.
Replies from: tut↑ comment by tut · 2010-02-14T19:14:55.156Z · LW(p) · GW(p)
Upvoted. Although I am curious as to how you will measure the status hits that various people take from being wrong.
Replies from: whpearson↑ comment by whpearson · 2010-02-14T19:30:42.602Z · LW(p) · GW(p)
I'd assumed there were standard ways of measuring it along the lines of a typical psychology experiment: involve two groups of people in two different scenarios (wrong, and wrong with retreat). Then quiz the audience on their opinion of the person: their intelligence, whether they would work with them, whether they would trust them to perform in their area of expertise, be their friend, etc.
However I can't find much with a bit of googling. I'll have a look into it later.
Replies from: tut↑ comment by tut · 2010-02-14T19:42:39.787Z · LW(p) · GW(p)
Thanks. That sounds good, but it is an experimental program, not something you'd observe on Less Wrong.
I expect that you could get more complex results than yes or no. Like with some primes or some observers preparing a retreat would help, with others it wouldn't, and in some contexts you'd lose status and credibility directly for trying to prepare a retreat.
Replies from: whpearson↑ comment by whpearson · 2010-02-15T13:34:48.917Z · LW(p) · GW(p)
True. We are interested in communities where truth-tracking is high status, so that cuts down the number of contexts. We would also probably need to evaluate it against other ways of coping with being incorrect (disassociation e.g. Eliezer(1999), apology etc) and see whether it is a good strategy on average.
comment by byrnema · 2010-02-13T17:15:13.745Z · LW(p) · GW(p)
I seem to be entering a new stage in my 'study of Less Wrong beliefs' where I feel like I've identified and assimilated a large fraction of them, but am beginning to notice a collusion of contradictions. This isn't so surprising, since Less Wrong is the grouped beliefs of many different people, and it's each person's job to find their own self-consistent ribbon.
But just to check one of these -- Omega's accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?
You can get around the universe being non-deterministic because of quantum mechanical considerations using the many worlds hypothesis: all symmetric possible 'quark' choices are made, and the universe evolves all of these as branching realities. If your choice to one-box or two-box is dependent upon some random factors, then Omega can't predict what will happen because when he makes the prediction, he is up-branch of you. He doesn't know which branch you'll be in. Or, more accurately, he won't be able to make a prediction that is true for all the branches.
Replies from: Eliezer_Yudkowsky, orthonormal, jimrandomh, byrnema, Alicorn, ciphergoth, Vladimir_Nesov, tut↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-13T18:55:29.178Z · LW(p) · GW(p)
So long as you make your Newcomb's choice for what seem like good reasons rather than by flipping a quantum coin, it is likely that very many of you will pick the same good reasons, and that Omega can easily achieve 99% or higher accuracy. I would expect almost no Eliezer Yudkowskys to two-box - if Robin Hanson is right about mangled worlds and there's a cutoff for worlds of very small amplitude, possibly none of me. Remember, quantum branching does not correspond to high-level decisionmaking.
Replies from: byrnema, gregconen↑ comment by byrnema · 2010-02-13T19:03:47.698Z · LW(p) · GW(p)
Yes, most Eliezer Yudkowskys will 1-box. And most byrnemas too. But the new twist (new for me, anyway) is that the Eliezer's that two-box are the ones that really win, as rare as they are.
Replies from: Eliezer_Yudkowsky, gregconen↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-13T19:22:04.124Z · LW(p) · GW(p)
The one who wins or loses is the one who makes the decision. You might as well say that if someone buys a quantum lottery ticket, the one who really wins is the future self who wins the lottery a few days later; but actually, the one who buys the lottery ticket loses.
↑ comment by gregconen · 2010-02-13T19:14:31.150Z · LW(p) · GW(p)
The slight quantum chance that EY will 2-box causes the sum of EYs to lose, relative to a perfect 1-boxer, assuming Omega correctly predicts that chance and randomly fills boxes accordingly. The precise Everett branches where EY 2-boxes and where EY loses are generally different, but the higher the probability that he 1-boxes, the higher his expected value is.
Replies from: byrnema↑ comment by gregconen · 2010-02-13T19:02:31.077Z · LW(p) · GW(p)
Interestingly, I worked through the math once to see if you could improve on committed 1-boxing by using a strategy of quantum randomness. Assuming Omega fills the boxes such that P(box A has $)=P(1-box), P(1-box)=1 is the optimal solution.
Replies from: byrnema↑ comment by byrnema · 2010-02-13T20:32:04.276Z · LW(p) · GW(p)
Interesting. I was idly wondering about that. Along somewhat different lines:
I've decided that I am a one-boxer, and I will one box. With the following caveat: at the moment of decision, I will look for an anomaly with virtually zero probability. A star streaks across the sky and fuses with another one. Someone spills a glass of milk and halfway towards the ground, the milk rises up and fills itself back into the glass. If this happens, I will 2-box.
Winning the extra amount in this way in a handful of worlds won't do anything to my average winnings-- it won't even increase it by epsilon. However, it could make a difference if something really important is at stake, where I would want to secure the chance that it happens one time in the whole universe.
Replies from: byrnema, Nick_Tarleton↑ comment by byrnema · 2010-02-13T21:11:37.268Z · LW(p) · GW(p)
Why is this comment being down-voted? I thought it was rather clever to use Omega's one weak spot -- quantum uncertainty -- to optimize your winnings even over a set with measure zero.
Replies from: MrHen, Jack↑ comment by MrHen · 2010-02-15T15:56:06.011Z · LW(p) · GW(p)
Because Omega is going to know what triggers you would use for anomalies. A star streaking across the sky is easy to see coming if you know the current state of the universe. As such, Omega would know you are about to two-box even though you are currently planning to one-box.
When the star streaks across the sky, you think, "Ohmigosh! It happened! I'm about to get rich!" Then you open the boxes and get $1000.
Essentially, it boils down to this: if you can predict a scenario where you will two-box instead of one-box, then Omega can as well.
The idea of flipping quantum coins is more foolproof. The idea of stars streaking or milk unspilling is only hard for us to see coming. Not to mention it will probably trigger all sorts of biases when you start looking for ways to cheat the system.
Note: I am not up to speed on quantum mechanics. I could be off on a few things here.
Replies from: byrnema↑ comment by byrnema · 2010-02-15T16:17:20.186Z · LW(p) · GW(p)
OK, right: looking for a merging of stars would be a terrible anomaly to use because that's probably classical mechanics and Omega-predictable. The milk unspilling would still be a good example, because Omega can't see it coming either. (He can accurately predict that I will two-box in this case, but he can't predict that the milk will unspill.)
I would have to be very careful that the anomaly I use is really not predictable. For example, I screwed up with the streaking star. I was already reluctant to trust flipping quantum coins, whatever those are. They would need to be flipped or simulated by some mechanical device and may have all kinds of systematic biases and impracticalities if you are actually trying to flip 10^23^23 coins.
Without having plenty of time to think about it, and say, some physicists advising me, it would probably be wise for me to just one-box.
↑ comment by Jack · 2010-02-13T21:31:24.751Z · LW(p) · GW(p)
I didn't down vote but I confess I don't really know what you're talking about in that comment. Why would you two box in that case? What really important thing is at stake? I don't get it.
Replies from: byrnema↑ comment by byrnema · 2010-02-13T21:52:17.083Z · LW(p) · GW(p)
OK. The way I've understood the problem with Omega is that Omega is a perfect predictor so you have 2 options and 2 outcomes:
you two box --> you get $2,000 ($1000 in each box)
you one box --> you get 1M ($1M in one box, $1000 in the second box)
If Omega is not a perfect predictor, it's possible that you two box and you get 1,001,000. (Omega incorrectly predicted you'd one box.)
However, if you are likely to 2box using this reasoning, Omega will adjust his prediction accordingly (and will even reduce your winnings when you do 1box -- so that you can't beat him).
My solution was to 1box almost always -- so that Omega predicts you will one box, but then 'cheat' and 2-box almost never (but sometimes). According to Greg, your 'sometimes' has to be over a set of measure 0, any larger than that and you'll be penalized due to Omega's arithmetic.
What really important thing is at stake?
Nothing -- if only an extra thousand is at stake, I probably wouldn't even bother with my quantum caveat. One million dollars would be great anyway. But I can imagine an unfriendly Omega giving me choices where I would really want to have both boxes maximally filled ... and then I'll have to realize (rationally) that I must almost always 1 box, but I can get away with 2-boxing a handful of times. The problem with a handful, is that how does a subjective observer choose something so rarely? They must identify an appropriately rare quantum event.
Replies from: Jack↑ comment by Jack · 2010-02-13T22:13:40.604Z · LW(p) · GW(p)
So this job could even be accomplished by flipping a quantum coin 10000 times and only two-boxing when they come up tails each time. You're just looking for a decision mechanism that only applies in a handful of branches.
Replies from: byrnema↑ comment by byrnema · 2010-02-13T22:30:57.451Z · LW(p) · GW(p)
Yes, exactly.
Replies from: gregconen↑ comment by gregconen · 2010-02-14T00:01:05.947Z · LW(p) · GW(p)
The math is actually quite straight-forward, if anyone cares to see it. Consider a generalized Newcomb's problem. Box A either contains $A or nothing, while box B contains $B (obviously A>B, or there is no actual problem). Let Pb be the probability that you 1-box. Let Po be the probability that Omega fills box A (note that only quantum randomness counts here. If you decide by a "random" but deterministic process, Omega knows how it turns out, even if you don't, so Pb=0 or 1). Let F be your expected return.
Regardless of what Omega does, you collect the contents of box A, and have a (1-Pb) probability of collecting the contents of box B. F(Po=1)= A + (1-Pb)B
F(Po=0)=(1-Pb)B
For the non-degenerate cases, these add together as expected. F(Po, Pb) = Po(A + (1-Pb)B) + (1-Po)[(1-Pb)B]
Suppose Po = Pb := P
F(P) = P(A + (1-P)B) + [(1-P)^2] B
=P(A + B - PB) + (1-2P+P^2) B
=PA + PB - (P^2)B + B - 2PB + (P^2)B
=PA + PB + B - 2PB
=B + P(A-B)
If A > B, F(P) is monotonically increasing, so P = 1 is the gives maximum return. If A<B, P=0 is the maximum (I hope it's obvious to everyone that if box B has MORE money than a full box A, 2-boxing is ideal).
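The algebra above is easy to spot-check numerically (a quick sketch; the $1,000,000/$1,000 amounts are just illustrative stand-ins for A and B):

```python
# Spot-check of the derivation F(P) = B + P*(A - B),
# for the case where Omega's fill probability Po equals
# the 1-boxing probability Pb = P.
def expected_return(P, A, B):
    # You always take box A (full with probability P), and
    # also take box B with probability (1 - P).
    return P * (A + (1 - P) * B) + (1 - P) ** 2 * B

A, B = 1_000_000, 1_000
for P in [0.0, 0.25, 0.5, 0.9, 1.0]:
    assert abs(expected_return(P, A, B) - (B + P * (A - B))) < 1e-6

# With A > B the expression is monotonically increasing in P,
# so committed 1-boxing (P = 1) is optimal and pays exactly A.
assert expected_return(1.0, A, B) == A
```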
Replies from: Jordan↑ comment by Jordan · 2010-02-14T01:17:51.734Z · LW(p) · GW(p)
I'm not sure why you take Po = Pb. If Omega is trying to maximize his chance of predicting correctly then he'll take Po = 1 if Pb > 1/2 and Po = 0 if Pb < 1/2. Then, assuming A > B/2, the optimal choice is Pb = 1/2.
Actually, if Omega behaves this way there is a jump discontinuity in expected value at Pb = 1/2. We can move the optimum away from the discontinuity by postulating there is some degree of imprecision in our ability to choose a quantum coin with the desired characteristic. Maybe when we try to pick a coin with bias Pb we end up with a coin with bias Pb+e, where e is an error chosen from a uniform distribution over [-E, E]. The optimal choice of Pb is now 1/2 + 2E, assuming A > 2EB, which is the case for sufficiently small E (E < 1/4 suffices). The expected payoff is now robust (continuous) to small perturbations in our choice of Pb.
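This threshold model can be sketched numerically (a sketch under the comment's assumptions: Omega fills box A exactly when the 1-boxing probability exceeds 1/2, the coin's actual bias is perturbed by uniform noise of half-width E, and the dollar amounts are illustrative):

```python
import random

def expected_payoff(pb, A=1_000_000, B=1_000):
    # Threshold Omega: fills box A iff it predicts 1-boxing is more likely.
    po = 1.0 if pb > 0.5 else 0.0
    return po * A + (1 - pb) * B

def expected_payoff_noisy(pb_target, E, trials=100_000, seed=0):
    # The coin we actually get has bias pb_target + e, with e ~ U(-E, E).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pb = min(1.0, max(0.0, pb_target + rng.uniform(-E, E)))
        total += expected_payoff(pb)
    return total / trials

# With a perfectly chosen coin, just above the threshold beats
# committed 1-boxing by roughly B/2...
assert expected_payoff(0.51) > expected_payoff(1.0)

# ...but with imprecision E, aiming at exactly 1/2 risks landing below
# the threshold, so a target safely above the noise band is robust.
assert expected_payoff_noisy(0.5 + 2 * 0.1, E=0.1) > expected_payoff_noisy(0.5, E=0.1)
```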
Replies from: gregconen↑ comment by gregconen · 2010-02-14T01:41:39.055Z · LW(p) · GW(p)
A good point.
Your solution does have Omega maximize right answers. My solution works if Omega wants the "correct" result summed over all Everett branches: for every you that 2-boxes, there exists an empty box A, even if it doesn't usually go to the 2-boxer.
Both answers are correct, but for different problems. The "classical" Newcomb's problem is unphysical, just as byrnema initially described. A "Quantum Newcomb's problem" requires specifying how Omega deals with quantum uncertainty.
Replies from: Jordan↑ comment by Jordan · 2010-02-14T02:02:25.114Z · LW(p) · GW(p)
Interesting. Since the spirit of Newcomb's problem depends on 1-boxing have a higher payoff, I think it makes sense to additionally postulate your solution to quantum uncertainty, as it maintains the same maximizer. That's so even if the Everett interpretation of QM is wrong.
↑ comment by Nick_Tarleton · 2010-02-13T23:16:31.074Z · LW(p) · GW(p)
Let p be the probability that you 2-box, and suppose (as Greg said) that Omega lets P(box A empty) = p with its decision being independent of yours. It sounds like you're saying you only care about the frequency with which you get the maximal reward. This is P(you 2-box)*P(box A full) = p(1-p) which is maximized by p=0.5, not by p infinitesimally small.
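The p(1-p) claim is easy to confirm with a brute-force check (a quick sketch; only the maximal-reward event, two-boxing with box A full, is counted here):

```python
# P(you 2-box) * P(box A full) = p * (1 - p),
# which peaks at p = 0.5, not at p infinitesimally small.
def max_reward_prob(p):
    return p * (1 - p)

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=max_reward_prob)
assert abs(best - 0.5) < 1e-9
```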
↑ comment by orthonormal · 2010-02-13T19:58:04.779Z · LW(p) · GW(p)
I think Omega's capabilities serve a LCPW function in thought experiments; it makes the possibilities simpler to consider than a more physically plausible setup might.
Also, I'd say that our wetware brains probably aren't close to deterministic in how we decide (though it would take knowledge far beyond what we currently have to be sure of this), but e.g. an uploaded brain running on a classical computer would be perfectly (in principle) predictable.
↑ comment by jimrandomh · 2010-02-13T17:34:47.003Z · LW(p) · GW(p)
If your choice to one-box or two-box is dependent upon some random factors, then Omega can't predict what will happen because when he makes the prediction, he is up-branch of you. He doesn't know which branch you'll be in.
What Omega can do instead is simulate every branch and count the number of branches in which you two-box, to get a probability, and treat you as a two-boxer if this probability is greater than some threshold. This covers both the cases where you roll a die, and the cases where your decision depends on events in your brain that don't always go the same way. In fact, Omega doesn't even need to simulate every branch; a moderate sized sample would be good enough for the rules of Newcomb's problem to work as they're supposed to.
But the real reason for treating Omega as a perfect predictor is that one of the more natural ways of modeling an imperfect predictor is to decompose it into some probability of being a perfect predictor and some probability of its prediction being completely independent of your choice, the probabilities depending on how good a predictor you think it really is. In that context, denying the possibility that a perfect predictor could exist is decidedly unhelpful.
↑ comment by Alicorn · 2010-02-13T17:29:42.870Z · LW(p) · GW(p)
I'm sufficiently uninformed on how quantum mechanics would interact with determinism that so far I've been operating under the assumption that it doesn't. Maybe someone here can enlighten me? Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior? Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven't found? (I'm not sure even in principle how you could prove that something is random. It'd be proving the negative on the existence of causation for a possibly-hidden cause.)
Replies from: orthonormal, tut, Jack↑ comment by orthonormal · 2010-02-13T20:02:59.812Z · LW(p) · GW(p)
Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior?
Yes; since many important macroscopic events (e.g. weather, we're quite sure) are extremely sensitive to initial conditions, two Everett branches that differ only by a single small quantum event can quickly diverge in macroscopic behavior.
↑ comment by tut · 2010-02-13T17:37:07.183Z · LW(p) · GW(p)
Does the behavior of things-that-behave-quantumly typically affect macro-level events...?
Yes. They only appear weird if you look at small enough scales, but classical electrons would not have stable orbits, so without quantum effects there'd be no stable atoms.
Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven't found?
No, but there is evidence. There is a proof that if they were caused by something unknown but deterministic (or if there even was a classical probability function for certain events) then they would follow Bell's inequalities. But that appears not to be the case.
Replies from: byrnema, wnoise, Alicorn↑ comment by byrnema · 2010-02-13T17:43:49.358Z · LW(p) · GW(p)
But this is where things get really shaky for materialism. If something cannot be explained in X, this means there is something outside X that determines it.
Materialists must hope that in spite of Bell's inequalities, there is some kind of non-random mechanism that would explain quantum events, regardless of whether it is possible for us to deduce it.
Alicorn asked above:
I'm not sure even in principle how you could prove that something is random.
In principle, you can't. And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random. The non-refutability of materialism depends upon never being able to demonstrate that something is actually random.
Later edit: I realize that this comment is somewhat of a non-sequitur in the context of this thread. (oops) I'll explain that these kinds of questions have been my motivation for thinking about Newcomb in the first place. Sometimes I'm worried about whether materialism is self-consistent, sometimes I'm worried about whether dualism is a coherent idea within the context of materialism, and these questions are often conflated in my mind as a single project.
Replies from: tut↑ comment by tut · 2010-02-13T19:24:22.728Z · LW(p) · GW(p)
And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random.
In that case I am not a materialist. I don't believe in any entities that materialists don't believe in, but I do believe that you have to resort to Many Worlds in order to be right and believe in determinism. Questions that amount to asking "which Everett branch are we in" can have nondeterministic answers.
Replies from: byrnema, CarlShulman↑ comment by byrnema · 2010-02-13T19:53:44.948Z · LW(p) · GW(p)
No worries -- you can still be a materialist. Many worlds is the materialist solution to the problem of random collapse. (But I think that's what you just wrote -- sorry if I misunderstood something.)
Suppose that a particle has a perfectly undetermined choice to go left or go right. If the particle goes left, a materialist must hold in principle that there is a mechanism that determined the direction, but then they can't say the direction was undetermined.
Many worlds says that both directions were chosen, and you happen to find yourself in the one where the particle went left. So there is no problem with something outside the system swooping down and making an arbitrary decision.
↑ comment by CarlShulman · 2010-02-13T19:58:06.227Z · LW(p) · GW(p)
Those sorts of question can arise in non-QM contexts too.
↑ comment by Alicorn · 2010-02-13T17:42:42.595Z · LW(p) · GW(p)
What are Bell's inequalities, and why do quantumly-behaving things with deterministic causes have to follow them?
Replies from: MBlume, byrnema, Eliezer_Yudkowsky, tut, CronoDAS↑ comment by byrnema · 2010-02-13T18:21:35.846Z · LW(p) · GW(p)
The EPR paradox (Einstein-Podolsky-Rosen paradox) is a set of experiments that suggest 'spooky action at a distance' because particles appear to share information instantaneously, at a distance, long after an interaction between them.
People applying "common sense" would like to argue that there is some way that the information is being shared -- some hidden variable that collects and shares the information between them.
Bell's Inequality only assumes that there is some such hidden variable operating locally* -- with no specifications of any kind on how it works -- and deduces correlations between particles sharing information that is in contradiction with experiments.
* that is, mechanically rather than 'magically' at a distance
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-15T09:06:04.101Z · LW(p) · GW(p)
Um... am I missing something or did no one link to, ahem:
http://lesswrong.com/lw/q1/bells_theorem_no_epr_reality/
Replies from: Alicorn↑ comment by tut · 2010-02-13T18:26:55.737Z · LW(p) · GW(p)
Well, actually everything has to follow them because of Bell's Theorem.
Edit: The second link should be to this explanation, which is somewhat less funny, but actually explains the experiments that violate the inequalities. Sorry that I took so long, but it appeared that the server was down when I first tried to fix it, so I went and did other things for half an hour.
↑ comment by Jack · 2010-02-13T21:28:53.783Z · LW(p) · GW(p)
There is no special line where events become macro-level events. It's not like you get to 10 atoms or a mole and suddenly everything is deterministic again. Your position right now is subject to indeterminacy. It just happens that you're big enough that the chance that every particle of your body moves together in the same, noticeable direction is very, very small (and by very small I mean that I can confidently predict it will never happen).
In principle our best physics tells us that determinism is just false as a metaphysics. Other people have answered the question you meant to ask, which is whether the extreme indeterminacies of very small particles can affect the actions of much larger collections of particles.
Replies from: orthonormal↑ comment by orthonormal · 2010-02-13T21:38:51.481Z · LW(p) · GW(p)
IAWYC except, of course, for this:
In principle our best physics tells us that determinism is just false as a metaphysics.
As said above and elsewhere, MWI is perfectly deterministic. It's just that there is no single fact of the matter as to which outcome you will observe from within it, because there's not just one time-descendant of you.
Replies from: Jack↑ comment by Jack · 2010-02-13T22:07:20.786Z · LW(p) · GW(p)
That's a fair point, but I don't think it is quite that easy. On one formulation a deterministic system is a system whose end conditions are set by the rules of the system and the starting conditions. Under this definition, MWI is deterministic. But often what we mean by determinism is that it is not the case that the world could have been otherwise. For one extension of 'world' that is true. But for another extension, the world not only could have been otherwise. It is otherwise. There are also a lot of confusions about our use of indexicals here: what we're referring to with "I", "You", "This", "That", "My", etc. Determinism usually implies that every true statement (including true statements with indexicals) is necessarily true. But it isn't obvious to me that many worlds gives us that. Also, a common thought experiment to glean people's intuitions about determinism is basically to say that we live in a universe where a super computer that can exactly predict the future is possible. MWI doesn't allow for that.
Perhaps we shouldn't try to fit our square-pegged physics into the round holes of traditional philosophical concepts. But I take your point.
Replies from: pengvado↑ comment by pengvado · 2010-02-14T02:13:11.150Z · LW(p) · GW(p)
Why would determinism have anything to say about indexicals? There aren't any Turing-complete models that forbid indexical uncertainty; you can always copy a program and put the copies in different environments. So I don't see what use such a concept of "determinism" would have.
Replies from: Jack↑ comment by Jack · 2010-02-14T03:55:26.859Z · LW(p) · GW(p)
Thinking about this, it isn't a concern about indexicals but a concern about reference in general. When we refer to an object we're not referring to its extension throughout all Everett branches, but we are referring to an object extended in time. So take a sentence like "The table moved from the center of the room to the corner." If determinism is true we usually think that all sentences like this are necessary truths and sentences like "The table could have stayed in the center" are false. But I'm not sure what the right way to evaluate these sentences is given MWI.
Replies from: Jack↑ comment by Paul Crowley (ciphergoth) · 2010-02-13T17:27:25.678Z · LW(p) · GW(p)
Perfection is impossible, but a very, very accurate prediction might be possible.
↑ comment by Vladimir_Nesov · 2010-02-13T18:58:24.544Z · LW(p) · GW(p)
The world is deterministic at least to the extent that everything knowable is determined (but not necessarily the other way around). This is why you need determinism in the world in order to be able to make decisions (and can't use something not being determined as a reason for the possibility of making decisions).
comment by magfrump · 2010-02-01T19:00:07.681Z · LW(p) · GW(p)
According to some people we here at less wrong are good at determining the truth. Other people are notoriously not.
I don't know that Less Wrong is the appropriate venue for this, but I have felt for some time that I trust the truth-seeking capability here and that it could be used for something more productive than arguments about meta-ethics (no offense to the meta-ethicists intended). I also realize that people are fairly supportive of SIAI here in terms of giving spare cash away, but I feel like the community would be a good jumping-off point for a polling organization.
So I guess this leads to a few questions:
-Is anyone at LW currently involved with a polling firm?
-Is anyone (else) at LW interested in doing polls?
-Is LW an appropriate place to create a truth-seeking business, such as a pollster or a sponsor for studies?
None of these questions are immediate since I am a broke undergrad rather than an entrepreneur.
Replies from: mattnewport, arbimote, Jack↑ comment by mattnewport · 2010-02-01T19:29:13.573Z · LW(p) · GW(p)
I'm not sure I understand the connection between truth-seeking and polling, unless the specific truth you seek is simply the percentage of people who give a particular answer to a poll. Are you simply talking about a more accurate polling company or using polling to find other truths?
Replies from: JamesAndrix, magfrump↑ comment by JamesAndrix · 2010-02-01T19:43:35.042Z · LW(p) · GW(p)
All that, and how does it make money?
Replies from: ideclarecrockerrules↑ comment by ideclarecrockerrules · 2010-02-04T23:24:36.034Z · LW(p) · GW(p)
Possibly related: I have a bet going with a reddit-acquaintance; basically, I gave him an upvote, and if x turns out to be true, he donates $1000 to SIAI.
If members of this community have an accurate, well-calibrated map, making bets could be a cost-effective way to pump money into SIAI or other non-profits/charities (which signals caring as well as integrity).
Is such a thing in the realm of Dark Arts?
↑ comment by magfrump · 2010-02-01T23:07:11.454Z · LW(p) · GW(p)
Yes, a more accurate polling company; potentially polling on alternative subjects, I also had scientific studies (grant-writing, peer-reviewing) in mind but I have even less idea how that works and how to express what I would actually think about it.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-01T23:24:35.482Z · LW(p) · GW(p)
The two examples you linked of bad polling seem to be examples of polling fraud rather than incompetence. It is not that these companies did not understand how to conduct an accurate poll, rather that they don't appear to have been motivated to do so.
It seems to me that accurate polling is quite a well understood problem. Legitimate polling companies exist that are reasonably good at it. In many cases I don't think there is much value (from a truth seeking perspective) in the poll data but I think it generally answers the question "what percentage of people give answer Y to question X?" fairly well. That's just not a very useful piece of data in many cases.
↑ comment by arbimote · 2010-02-02T06:17:32.308Z · LW(p) · GW(p)
Here's an idea for how a LW-based commercial polling website could operate. Basically it is a variation on PredictionBook with a business model similar to TopCoder.
The website has business clients, and a large number of "forecasters" who have accounts on the website. Clients pay to have their questions added to the website, and forecasters give their probability estimates for whichever questions they like. Once the answer to a question has been verified, each forecaster is financially rewarded using some proper scoring rule. The more money assigned to a question, the higher the incentive for a forecaster to have good discrimination and calibration. Some clever software would also be needed to combine and summarize data in a way that is useful to clients.
The main advantage of this over other prediction markets is that the scoring rule encourages forecasters to give accurate probability estimates.
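One concrete choice of proper scoring rule for such a site is the quadratic (Brier-style) rule. The sketch below is illustrative only; scaling the reward by the money on the question is an assumption, not part of the proposal:

```python
def brier_reward(p, outcome, stake=100.0):
    """Reward a forecast p in [0, 1] with a quadratic (Brier-style)
    proper scoring rule, scaled by the stake on the question.
    Expected reward is maximized by reporting one's true probability."""
    actual = 1.0 if outcome else 0.0
    return stake * (1.0 - (p - actual) ** 2)

# A confident correct forecast beats a hedged one:
#   brier_reward(0.9, True) > brier_reward(0.5, True)
# and a confident wrong forecast is penalized hardest:
#   brier_reward(0.9, False) < brier_reward(0.5, False)
```

This is exactly the property that distinguishes the proposal from a standard prediction market: the incentive-compatible move is to report your honest probability, not to trade on mispricings.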
↑ comment by Jack · 2010-02-02T00:21:50.308Z · LW(p) · GW(p)
I'm not interested in polling in particular, but I'm interested in it insofar as it is a way of getting data that people don't otherwise have and thus improving our predictions. That said, LW totally is the place to create a truth-seeking business, and as another broke undergrad (and putting off grad school) if you, anyone else and I can come up with a profitable venture that involves employing truth seeking I definitely want in.
The obvious way to make money with this is consulting, but I'm not sure why anyone would hire a bunch of philosophy/math/CS types to do the job.
Replies from: arbimote↑ comment by arbimote · 2010-02-02T02:29:54.390Z · LW(p) · GW(p)
People would hire the firm if it could be demonstrated that the firm consistently produced accurate results. So initial interest might be low, but pick up over time as the track record gets longer.
Replies from: Jack↑ comment by Jack · 2010-02-02T02:45:07.122Z · LW(p) · GW(p)
Right, but how do you get started? Begin by giving away the service? Work on spec? What kind of companies/organizations would hire such a firm?
Replies from: LucasSloan, arbimote↑ comment by LucasSloan · 2010-02-02T07:43:29.388Z · LW(p) · GW(p)
Bet on propositions on InTrade. If you are good, you will make money from the exercise, as well as establish credibility.
↑ comment by arbimote · 2010-02-02T05:03:58.346Z · LW(p) · GW(p)
Perhaps start by giving it away, or sell it to small buyers (eg. individuals).
But I've got to admit I don't have experience in this area, so my suggestions are mostly naive speculation (but hopefully my speculation is of high quality!). Research into existing prediction companies is called for.
comment by JRMayne · 2010-02-01T16:13:32.094Z · LW(p) · GW(p)
Bleg for assistance:
I’ve been intermittently discussing Bayes’ Theorem with the uninitiated for years, with uneven results. Typically, I’ll give the classic problem:
3,000 people in the US have Sudden Death Syndrome. I have a test that is 99% accurate; that is, it will be wrong about any given person one percent of the time. Steve tests positive for SDS. What is the chance that he has it?
Afterwards, I explain the answer by comparing the false positives to the true positives. And, then I see the Bayes’ Theorem Look, which conveys to me this: "I know Mayne’s good with numbers, and I’m not, so I suppose he’s probably right. Still, this whole thing is some sort of impractical number magic." Then they nod politely and change the subject, and I save the use of Bayes’ Theorem as a means of solving disagreements for another day.
So this leads to my giving a very short presentation on the Prosecutor’s Fallacy next week. The basics of the fallacy are if you’ve got a one-in-3 million DNA match on a suspect, that doesn’t mean it’s three million-to-one that you’ve got that dude’s DNA. I need to present it to bright, interested people who will go straight to brain freeze if I display any equations at all. This isn’t frequentists-vs.-Bayesians; this is just a simple application of Bayes’ Theorem. (I suspect this will be easier to understand than the medical problem.)
I’ve read Bayesian explanations, but I’m aiming at people who are actively uninterested in learning math, and if I can get them to understand only the Prosecutor’s Fallacy, I’ll call Win. A larger understanding of the underlying structure would be a bigger win. Has anyone done something like this before, with success (or with a failure of educational or entertainment value)?
Replies from: Kaj_Sotala, Peter_de_Blanc, jimmy, komponisto, Vladimir_Nesov, ciphergoth, rehana↑ comment by Kaj_Sotala · 2010-02-01T16:38:34.364Z · LW(p) · GW(p)
For this specific case, you could try asking the analogous question with a higher probability value. E.g. "if you’ve got a one-in-two DNA match on a suspect, does that mean it’s one-in-two that you’ve got that dude’s DNA?". Maybe you can have some graphic that's meant to represent several million people, with half of the folks colored as positive matches. When they say "no, it's not one-in-two", you can work your way up to the three million case by showing pictures displaying what the estimated amount of hits would be for a 1 to 3, 1 to 5, 1 to 10, 1 to 100, 1 to 1000 etc. case.
In general, try to use examples that are familiar from everyday life (and thus don't feel like math). For the Bayes' theorem introduction, you could try "a man comes to a doctor complaining about a headache. The doctor knows that both the flu and brain cancer can cause headaches. If you knew nothing else about the case, which one would you think was more likely?" Then, after they've (hopefully) said that the man is more likely to be suffering from the flu, you can mention that brain cancer is much more likely to cause a headache than the flu is, but because the flu is so much more common, their answer was nevertheless the correct one.
Replies from: Blueberry↑ comment by Blueberry · 2010-02-01T19:22:44.798Z · LW(p) · GW(p)
Other good examples:
Most car accidents occur close to people's homes, not because it's more dangerous close to home, but because people spend most of their driving time close to their homes.
Most pedestrians who get hit by cars get hit at crosswalks, not because it's more dangerous at a crosswalk, but because most people cross at crosswalks.
Most women who get raped get raped by people they know, not because strangers are less dangerous than people they know, but because they spend more time around people they know.
↑ comment by Peter_de_Blanc · 2010-02-01T17:30:41.403Z · LW(p) · GW(p)
If you're using Powerpoint, you might want to make a slide that says something like:
2,999 negatives -> 1% test positive -> 30 false positives
1 positive -> 99% test positive -> 1 true positive
So out of 31 positive tests, only 1 person has SDS.
Replies from: Eliezer_Yudkowsky, Morendil↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T17:58:59.163Z · LW(p) · GW(p)
If you've got the time, use a little horde of stick figures, entering into a testing machine and with test-positive results getting spit out.
↑ comment by Morendil · 2010-02-01T18:11:33.229Z · LW(p) · GW(p)
Your numbers have me confused. I'd read the grandparent as implying 300M total population, out of which 3000 have the disease. (This is a hint to clarify the info in the grandparent comment btw - whether I've made a dire mistake or not.)
Another point to clarify is that the test's detection power isn't necessarily the complement of its false positive rate. Here I assume "99%" characterizes both.
What I get: 300M times 1% false positive means 3M will test positive. Out of the 3000 who have the disease, 30 will test negative and 2970 positive. Out of the total population the number who will test positive is 3M+2970, of whom 2970 in fact have the disease, yielding a conditional probability of about 1 in 1000 that Steve has SDS.
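Under Morendil's stated assumptions (300M total population, 3000 sufferers, 99% sensitivity, 1% false positive rate), the arithmetic can be checked mechanically:

```python
population = 300_000_000     # assumed US population
sick = 3_000                 # people with SDS
sensitivity = 0.99           # P(test positive | sick)
false_positive_rate = 0.01   # P(test positive | healthy)

true_positives = sick * sensitivity                          # 2970
false_positives = (population - sick) * false_positive_rate  # ~3 million

# P(sick | positive) = true positives / all positives
p_sick_given_positive = true_positives / (true_positives + false_positives)
# ~0.00099, i.e. roughly 1 in 1000
```

The lesson for the Prosecutor's Fallacy is the same: a test (or DNA match) that is individually very accurate still yields mostly false positives when the base rate is tiny.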
Replies from: Peter_de_Blanc↑ comment by Peter_de_Blanc · 2010-02-01T20:31:26.115Z · LW(p) · GW(p)
Your numbers have me confused. I'd read the grandparent as implying 300M total population, out of which 3000 have the disease.
I fail at reading. I thought it said "ONE in 3000 people in the US...."
↑ comment by komponisto · 2010-02-01T17:44:42.618Z · LW(p) · GW(p)
I take it you've already looked at Eliezer's "Intuitive Explanation"?
I think it's really important to get the idea of a sliding scale of evidentiary strength across to people. (This is something that has occurred to me from some of my recent attempts to explain the Knox case to people without training in Bayesianism.) One's level of confidence that something is true varies continuously with the strength of the evidence. It's like a "score" that you're keeping, with information you hear about moving the score up and down.
The abstract structure of the prosecutor's fallacy is misjudging the prior probability. People forget that you start with a handicap -- and that handicap may be quite substantial. Thus, if a piece of evidence (like a test result) is worth, say "10 points" toward guilt, hearing about that piece of evidence doesn't necessarily make the score +10 in favor of guilt; if the handicap was, say, -7, then the score is only +3. If, say, a score of +15 is needed for conviction, the prosecution still has a long way to go.
(By the way, did you see my reply to your comment about psychological evidence?)
↑ comment by Vladimir_Nesov · 2010-02-02T10:51:40.233Z · LW(p) · GW(p)
LW ref: Privileging the hypothesis.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T16:52:10.691Z · LW(p) · GW(p)
You have to explain that Steve was chosen randomly for your example to be right.
comment by MrHen · 2010-02-10T22:12:05.905Z · LW(p) · GW(p)
Is there a way to get a "How am I doing?" review or some sort of mentor that I can ask specific questions? The karma feedback just isn't giving me enough detail, but I don't really want to pester everyone every time I have a question about myself.
The basic problem I need to solve is this: When I read an old post, how do I know I am hearing what I am supposed to be hearing? If I have a whole list of nitpicky questions, where do I go? If a question of mine goes unanswered, what do I do?
I don't know anyone here. I don't have the ability to stroll by someone and ask them for help.
Replies from: byrnema, MrHen, ciphergoth↑ comment by byrnema · 2010-02-10T23:27:40.852Z · LW(p) · GW(p)
These are excellent questions/ideas. I want a mentor too!
I thought about contacting you to see if you wanted to start a little study group reading through the sequences. (For example, I started reading through the metaethics sequence and it was useless. My kinds of questions are like, 'What do any of these words mean? What's the implied context? Etc., etc.) But I'm not very good at details, and couldn't imagine any way of doing so. Except maybe meeting somewhere like Second Life so we can chat...
Replies from: Eliezer_Yudkowsky, ciphergoth, MrHen↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-11T02:57:15.789Z · LW(p) · GW(p)
Do consider not starting with the metaethics sequence...
↑ comment by Paul Crowley (ciphergoth) · 2010-02-10T23:30:12.736Z · LW(p) · GW(p)
Scheduled IRC meetings?
Replies from: CassandraR↑ comment by CassandraR · 2010-02-10T23:37:32.193Z · LW(p) · GW(p)
Sounds good to me. I would enjoy being present at a meeting in order to discuss topics from this site.
↑ comment by MrHen · 2010-02-11T00:00:44.320Z · LW(p) · GW(p)
I thought about contacting you to see if you wanted to start a little study group reading through the sequences.
Yeah, actually, I would be willing to do that.
Replies from: byrnema↑ comment by byrnema · 2010-02-11T00:14:25.771Z · LW(p) · GW(p)
Great! And we'll announce when we meet and invite whoever wants to come?
Let's start by doing it one time.
Replies from: MrHen↑ comment by MrHen · 2010-02-11T00:19:12.919Z · LW(p) · GW(p)
Cool. Does IRC work for you? I think I still have a client lurking about somewhere...
And I vaguely remember there being an LW channel at one point. Yep: #lesswrong. And there is a nifty web link in the wiki link. Cool.
EDIT: Yeah, I was wondering about the hhhhhhhhf1. I would have guessed a cat.
Replies from: byrnema↑ comment by byrnema · 2010-02-18T13:17:13.755Z · LW(p) · GW(p)
Countdown: 13 hours
IRC Meeting At Less Wrong:
MrHen and I are meeting at 8:15 p.m. Central for our first IRC Less Wrong study-group session. Please join us -- we will meet here a few minutes before the meeting.
Our topic today is evidence: we'll discuss the post How Much Evidence Does it Take?, and possibly supporting posts such as What is Evidence?. Our goal is to build a foundation for discussing Occam's Razor and Einstein's Arrogance.
I'll send out regular announcements closer to the session if there is no recent comment activity here. Please announce if you are planning to attend -- it will encourage others to attend too.
Replies from: MrHen, Jack, byrnema, arundelo, Jack↑ comment by MrHen · 2010-02-19T01:04:31.613Z · LW(p) · GW(p)
Super easy specifics on how to get where we will be: Click on this link and enter a nickname (hopefully something similar to your name here). And that should do it.
All are welcome and you can just lurk if you want. I am there now while I munch on some beans for dinner but the discussion should begin in about an hour.
Replies from: byrnema↑ comment by Jack · 2010-02-19T05:38:58.709Z · LW(p) · GW(p)
So I ended up at the game in person. How did this go? Any insights to share with those of us who weren't there?
Replies from: byrnema↑ comment by byrnema · 2010-02-21T05:55:13.358Z · LW(p) · GW(p)
This is a transcript of the chat log.
In the post, How Much Evidence Does It Take, Eliezer described the concept of 'bits' of information. For example, if you wanted to choose winning lottery numbers with a higher probability, you could have a box that beeps for the correct lottery number with 100% probability and only beeps for an incorrect number with 25% probability. Then the application of this box would represent 2 bits of information -- because it winnows your possible winning set by a factor of 4.
During the chat, we discussed this definition of "bits". MrHen brought in some mathematics to discuss the case where the box beeps with less than 100% probability for the correct number (reduced box sensitivity, with possibly the same specificity), and how this would affect the calculation of bits.
An interesting piece of trivia came up. Measuring information "base 2" is arbitrary of course and instead of measuring bits we could measure "bels" or "bans" (base 10).
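The "bits" arithmetic discussed in the session is just a base-2 logarithm of the likelihood ratio; a one-line sketch:

```python
import math

def bits_of_evidence(p_given_true, p_given_false):
    """Strength of evidence in bits: log2 of the likelihood ratio."""
    return math.log2(p_given_true / p_given_false)

# The box from the post: beeps with probability 1.00 for a winning
# number and 0.25 for a losing one, so a beep is worth log2(4) = 2 bits.
beep_bits = bits_of_evidence(1.00, 0.25)

# The same quantity in base 10 gives "bans": log10(4) is about 0.602.
```

Swapping `math.log2` for `math.log10` (or `math.log`) yields the bans (or nats) mentioned above.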
Replies from: SilasBarta↑ comment by SilasBarta · 2010-02-21T06:31:39.619Z · LW(p) · GW(p)
Wow, I wish I'd been there for that (had to go to a trade group meeting) -- that's one of the topics that interests me!
Btw, I think you mean that a beep-for-incorrect gives you 2 bits of information. Just applying the box will usually (~75% of the time) not indicate either way. The average information gained from an application of the box (aka entropy of the box variable aka expected surprisal of using the box aka average information gain on using the box) would be ~0.5 bits.
And yes there's also nats (base e).
Replies from: komponisto↑ comment by komponisto · 2010-02-21T06:56:50.165Z · LW(p) · GW(p)
I believe the point was that a beep constitutes 2 bits of evidence for the hypothesis that the number is winning.
↑ comment by byrnema · 2010-02-18T23:10:17.979Z · LW(p) · GW(p)
Countdown: 3 hours till our IRC meeting.
You're welcome to join us.
Replies from: komponisto↑ comment by komponisto · 2010-02-19T01:31:13.784Z · LW(p) · GW(p)
How does one access it? Link?
Replies from: byrnema↑ comment by byrnema · 2010-02-19T01:38:07.860Z · LW(p) · GW(p)
MrHen left these convenient instructions.
↑ comment by Jack · 2010-02-18T13:57:52.190Z · LW(p) · GW(p)
If I'm home I'll log in. But I'm going to be watching basketball at the same time so my participation might not be heavy.
Replies from: wedrifid↑ comment by wedrifid · 2010-02-19T02:18:07.993Z · LW(p) · GW(p)
How much evidence does it take for you to accept 3:2 odds that your team will win the match given your prior understanding of each team's performance at various stages of a game?
Replies from: Jack↑ comment by Jack · 2010-02-19T05:09:42.581Z · LW(p) · GW(p)
So I actually have this idea of doing a series (or just a couple) of top level posts about rationality and basketball (or sports in general). I'm partly holding off because I'm worried that the rationality aspects are too basic and obvious and no one else will care about the basketball parts.
But sports are great for talking about rationality because there is never any ambiguity about the results of our predictions and because there are just bucket-loads of data to work with. On the other hand, a surprising amount of irrationality can still be found even in professional leagues where being wrong means losing money.
Anyway, to answer your question: You get two kinds of information from play at the beginning of the game. First, you get information about what the final score will be from the points that have been scored already. So if my team is up 10 points the other team needs to score 11 more points over the remainder of the game in order to win. The less time remaining in the game the more significant this gets. The other kind of information is information about how the teams are playing that day. But if a team is playing significantly better or worse than you would have predicted coming in, their performance is most likely just noise. Regression to the mean is what should be expected. So my prediction of a team's performance for the remainder of some game is going to be dominated by my priors (which hopefully are pretty sophisticated and based on a lot of data; for college basketball I start here and then adjust for a couple of things that can't be taken into account by that model: the way individual players match up against each other, injuries, any information about the teams' mental states, etc.).
If you have all this information you can actually give, at any point during a game, the odds of your team winning (there are a couple of other factors that need to be considered as well; in particular you need to estimate how many possessions there will be in the rest of the game, because the information we have about team performance is per possession, not per minute). I've also ignored fan attendance in this comment, but that is really important evidence as well. I ended up attending the game in person, and when I arrived I realized the venue included at least as many fans of the other team as there were fans of my team, and right there the probability my team was going to win dropped by 10%.
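As a toy illustration of the kind of in-game win-probability estimate described here (every number below is invented for the example; a real model would use the per-possession priors and adjustments Jack mentions):

```python
import random

def win_probability(lead, possessions_left, my_ppp, opp_ppp, n=10_000):
    """Monte Carlo estimate of P(win) from the current margin.
    my_ppp/opp_ppp are assumed points-per-possession rates; the
    per-possession noise (sigma = 1.5) is an arbitrary stand-in."""
    wins = 0
    for _ in range(n):
        margin = lead
        for _ in range(possessions_left):
            margin += random.gauss(my_ppp - opp_ppp, 1.5)
        wins += margin > 0
    return wins / n

# e.g. up 10 with ~20 possessions left, against a slightly weaker team:
#   win_probability(10, 20, 1.05, 1.00)
```

The same function shows why early leads matter less: the more possessions remain, the more room the noise has to erase the margin.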
↑ comment by Paul Crowley (ciphergoth) · 2010-02-10T23:20:10.802Z · LW(p) · GW(p)
I don't know anyone here. I don't have the ability to stroll by someone and ask them for help.
I'm not an expert either - in fact I'm not sure there is exactly expertise in what you ask - but mail me anytime - Paul Crowley, paul at ciphergoth dot org. Anyone here is very welcome to mail me.
Replies from: MrHen↑ comment by MrHen · 2010-02-10T23:57:48.227Z · LW(p) · GW(p)
Cool. Will do.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-11T08:17:30.549Z · LW(p) · GW(p)
Ace. If you don't get a response prompt me to check my spam filters!
comment by Furcas · 2010-02-07T00:13:08.942Z · LW(p) · GW(p)
I just finished reading Jaron Lanier's One-Half of a Manifesto for the second time.
The first time I read it must have been three years ago, and although I felt there were several things wrong with it, I hadn't come to what is now an inescapable conclusion for me: Jaron Lanier is one badly, badly confused dude.
I mean, I knew people could be this confused, but those people are usually postmodernists or theologians or something, not smart computer scientists. Honestly, I find this kind of shocking, and more than a little depressing.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-07T00:28:49.075Z · LW(p) · GW(p)
The remarkable and depressing thing to me is that most people are not able to see it at a glance. To me it just seems like a string of obvious bluffs and non-sequiturs. Do you remember what was going on in your head when you didn't see it at a glance?
Replies from: Furcas↑ comment by Furcas · 2010-02-07T00:51:43.282Z · LW(p) · GW(p)
It's difficult for me to remember how I used to think, even a few years ago. Hell, when there's a drastic change in the way I think about something, I have trouble remembering how I used to think mere days after the change.
Anyway, one thing I remember is that I kept giving Lanier the benefit of the doubt. I kept telling myself, "Well, maybe I don't understand what he's really trying to say." So the reason I didn't see the obvious would be... lack of self-confidence? Or maybe it's only because my own thoughts weren't all that clear back then. Or maybe because the way I used to parse stuff like Lanier's piece was a lot more, um, holistic than it is now, by which I mean that I didn't try to decompose what was written into simpler parts in order to understand it.
It's hard to tell.
comment by Alicorn · 2010-02-03T05:19:18.885Z · LW(p) · GW(p)
I am becoming increasingly disinclined to stick out the grad school thing; it's not fun anymore, and really, a doctorate in philosophy is not going to let me do anything substantially different in kind from what I'm doing now once I have it. Nor will it earn me barrels of money or do immense social good, so if it's not fun, I'm kinda low on reasons to stay. I haven't outright decided to leave, but you know what they say. I'm putting out tentative feelers for what else I'd do if I do wind up abandoning ship. Can anyone think of a use for me - ideally one that doesn't require me to eat my savings while I pick up other credentials first?
Replies from: bgrah449, Jordan, Unknowns, Eliezer_Yudkowsky, jhuffman, Kevin, JRMayne↑ comment by bgrah449 · 2010-02-03T15:08:03.388Z · LW(p) · GW(p)
Not directly applicable, but perhaps relevant: I was told this advice and found it useful (in that I used it to make important life decisions). "Don't do your passion for a job," she said. "Everyone wakes up one day and hates their job. Don't wake up one day and hate what you love. Do something you like that you're good at."
Also, I don't remember who told me this or if I made it up, but I've relayed it to people: Don't look for fulfillment from your job. Don't go for the highest peaks; just try to avoid the lowest valleys.
Replies from: orthonormal↑ comment by orthonormal · 2010-02-07T22:39:13.268Z · LW(p) · GW(p)
That's a rather interesting idea, and I wonder if there's any way to test it. It certainly accords with my experience— I'm pretty happy as a mathematician whose passion is more about arguing than about math. (I've started an occasional argument society, which is generally the highlight of my month.)
The reason this works (to whatever extent it works) probably boils down to status, and the fact that in a big world, everyone rises until they get introduced to the level above theirs. If math were my passion, I'd constantly be comparing myself to people better at it than I am, and I'd probably be miserable about it. (Even as it is, this part stings subconsciously.) But instead, I have a good niche in multiple social worlds: my colleagues think it's neat that I have these well-argued contrarian ideas about all kinds of topics, while my other friends are impressed by the fact I'm getting a PhD in math.
Of course, the same factor works with my other hobby (dancing), despite that being neither my vocation nor my passion. I think the takeaway lesson here is that we're not really happy unless we're at the top in our social niche, and that the best way to achieve this in a big modern world is to have multiple independent specialties...
↑ comment by Jordan · 2010-02-03T06:02:19.973Z · LW(p) · GW(p)
I think everyone in grad school has these moments, sometimes for prolonged stints. In the math world they seem to be suggestively correlated with making progress on research =p
Personally though, even when everything is going well in research, I still feel the same nagging sensation that I should either be out making butt-loads of cash or helping humanity (or helping humanity by donating butt-loads of cash).
Reasons I've stuck it out so far:
- I am absolutely terrified of a life of mediocrity. I don't want to end up in a cubicle.
- Academia is a good place to consistently meet reasonably intelligent people
- Setting your own schedule is pretty awesome
That said, I'm still not sold on it. I took 6 months off last year to try and found my own company. I'm still moonlighting it, and hoping I can get it to the point where I know it will fly or not before having to commit to a post doc position.
Replies from: Alicorn↑ comment by Alicorn · 2010-02-03T14:01:30.797Z · LW(p) · GW(p)
I'm not particularly terrified of mediocrity as long as it's not unsafe mediocrity. The cubicle doesn't appeal to me, but, say, I think I could be a pretty happy house spouse. As for meeting intelligent people, sure, they're around in academia, but I'm more interested in meeting smart people who I'd have some inclination to interact with socially, and the Internet seems, in practice, to be better for that. And I'm not setting my own schedule - I'm still doing my own coursework, and would have a couple more semesters of that to go even assuming a best case scenario.
Replies from: Jordan↑ comment by Jordan · 2010-02-04T09:45:38.133Z · LW(p) · GW(p)
House spouse doesn't have to be a mediocre life. In fact, it could more or less be the best 'job' ever. It's like a tenured professorship where you actually get to study and research whatever you want!
Huh. I hadn't thought of it before, but I'm going to have to add house spouse to my list of acceptable future paths.
Replies from: Alicorn↑ comment by Unknowns · 2010-02-03T05:54:08.764Z · LW(p) · GW(p)
Why isn't it fun?
Replies from: Alicorn↑ comment by Alicorn · 2010-02-03T13:59:02.227Z · LW(p) · GW(p)
In a nutshell: The environment is unsupportive and draining. The only teacher I "click" with is in a sub-field of study that I have next to no interest in, and even if I wanted to go study his topic to get to work with him, he's leaving at the end of this semester. Meanwhile, the teachers who work on the subjects I like whom I've completed courses with seem to actively dislike me. I don't think I'm a good fit for the department in general, which is uncomfortably political and stern, and trying to transfer would be hard because, having stuck it out this long, I've collected some less than admirable grades.
Replies from: komponisto, wedrifid↑ comment by komponisto · 2010-02-06T20:07:08.447Z · LW(p) · GW(p)
It sounds like you should try to transfer anyway.
I was surprised to read your comment above; I had always gotten the impression that you enjoyed what you were doing. (I also liked the idea that one of LW's top contributors was a philosophy grad student; it helps to counteract a slight tendency toward rivalry that I detect between "LW types" and academic philosophers.)
How about attempting to get in touch with people you think you would get along with elsewhere, and seeing if you can impress them?
Replies from: Alicorn↑ comment by Alicorn · 2010-02-06T20:09:39.355Z · LW(p) · GW(p)
I try to cultivate a cheerful attitude, which often projects. It failed me this semester, so I'm abandoning ship. You'll need to rely on thomblake for your philosophy grad student needs.
I might or might not try to resume my studies at a later date, but for now, I'm going to spend a month at the SIAI and see if they want to keep me :)
↑ comment by wedrifid · 2010-02-03T14:07:35.285Z · LW(p) · GW(p)
Wow. I keep forgetting that PhD students over there do actual coursework; it's research only over here. Apart from that little difference, I can empathize somewhat with graduate programs becoming draining, particularly when the process becomes predominately political.
Replies from: wnoise↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-03T07:39:02.376Z · LW(p) · GW(p)
How can we possibly know what your comparative advantage is, better than you do? In all seriousness, a certain amount of background information seems to be missing here.
Replies from: wedrifid, Alicorn↑ comment by jhuffman · 2010-02-03T13:12:30.388Z · LW(p) · GW(p)
I can't really answer your question, not knowing your skills. IT for medium to large corporations is still a pretty good field in some respects, and you can advance very quickly if you are smart, regardless of college education. Beginner-level certs to get a first-level helpdesk job would cost only a couple hundred dollars and take you a month or less to study up for, assuming you have any "knack" for computers. It does turn you into a cubicle drone, though. This is the route I took. It pays very well, but I do hate it; after twelve years of it I feel sort of trapped - we're somehow dependent on this level of income, etc. There was a period of time when I really was interested in the work itself (I do software development), but that's long since passed. Writing business applications gets pretty old after a while, and the new toys the vendors come out with every two years are just a way to re-sell the same solutions to the same problems...
My sister (2.5 years younger) and her husband both have PhDs from Notre Dame in English literature - roughly as useful as a philosophy degree, I'd guess. They seem to have done okay finding tenure-track jobs; their life is teaching, writing papers, going to conferences. They complain about stupid department politics, stupid field-of-study politics, stupid useless papers they have to grade, etc. I don't really think they are a lot happier with their jobs than I am with mine.
Replies from: Alicorn↑ comment by JRMayne · 2010-02-03T06:50:49.827Z · LW(p) · GW(p)
What do you do now? What do you like doing?
Replies from: Alicorn↑ comment by Alicorn · 2010-02-03T14:05:00.413Z · LW(p) · GW(p)
Right now, I'm taking classes, only one of which I chose for the topic instead of because I thought the teacher would be more likely to be sympathetic. I like drawing and cooking and reading and writing and talking and teaching (which they're not letting me do this semester because there were limited spots and I got one last semester) and learning things that are useful in some way, even if only in organizing my own thoughts or intriguing interlocutors.
Replies from: JRMayne↑ comment by JRMayne · 2010-02-03T14:56:43.929Z · LW(p) · GW(p)
How much more grad school do you have to go to your degree? This sounds like a profile of a teacher at some level, probably high school or college. The degree makes college an option. High school teaching may be more enjoyable for you; I don't know.
If you're a year away from your PhD, it probably makes sense to stick it out. If it's three years... three years is a long damn time to be unhappy somewhere.
Replies from: Alicorn↑ comment by Alicorn · 2010-02-03T15:09:04.048Z · LW(p) · GW(p)
The exact amount of time isn't fixed, but it taking less than three years would be surprising. I like the idea of teaching, maybe art or (at a sufficiently quirky school) logic/critical thinking, but don't have a certification and it looks like they take a long time to get.
comment by Liron · 2010-02-01T07:06:39.338Z · LW(p) · GW(p)
Mind-killing taboo topic that it is, I'd like to have a comment thread about LW readers' thoughts about US politics.
Replies from: Daniel_Burfoot, Larks, ciphergoth, ata, mattnewport, Liron, Liron, Liron↑ comment by Daniel_Burfoot · 2010-02-01T14:30:47.380Z · LW(p) · GW(p)
I recall EY commenting at some point that the way to make political progress is to convert intractable political problems into tractable technical problems. I think this kind of discussion would be more interesting and more profitable than a "traditional" mind-killing political debate.
It might be interesting, for example, to develop formal rationalist political methods. Some principles might include:
- Always conduct a comprehensive fact-gathering phase before beginning any policy discussion.
- Develop techniques to prevent people from becoming emotionally committed or status-linked to positions.
- Subject every statement to formal logical analysis; if a statement violates any obvious rule of logical inference, the statement is deleted and its author censured.
- Rigorously untangle webs of inference. A statement arguing against the death penalty should involve probability estimates of the number of crimes the penalty does (or does not) deter, the cost of administering it, etc., and connect these estimates to a global utility function. The statement must include analyses of how the argument changes in response to changes in the underlying probability estimates.
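The last two bullets amount to a sensitivity analysis over a utility model. Here is a toy version in which every number (the deterrence figures, the cost, and both utility weights) is an invented placeholder, included purely to show the structure of the method:

```python
def policy_utility(crimes_deterred_per_year, cost_per_year,
                   utility_per_crime_prevented=1.0, utility_per_dollar=1e-8):
    """Net utility of a policy, given point estimates of its effects.
    Both utility weights are arbitrary placeholders, not real figures."""
    return (crimes_deterred_per_year * utility_per_crime_prevented
            - cost_per_year * utility_per_dollar)

# Sweep the contested empirical estimate and see where the conclusion flips.
for deterred in [0, 50, 100, 200]:
    u = policy_utility(deterred, cost_per_year=10_000_000_000)
    print(deterred, "crimes deterred per year -> net utility", u)
```

The output makes explicit exactly which empirical disagreement the policy disagreement hinges on, which is the point of the proposed method.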
↑ comment by Larks · 2010-02-01T14:31:49.820Z · LW(p) · GW(p)
I disagree; discovering that someone holds political views opposed to yours can inhibit your ability to rationally consider their arguments; arguments become soldiers, etc.
Besides, I think the survey from ages ago showed the general spread of political views, and I doubt much has changed since. For discussing particular issues, there are other places available, and it may be that only by not discussing hot topics can we keep up the barriers to entry that keep the LW membership productive.
Replies from: magfrump, ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T14:38:29.315Z · LW(p) · GW(p)
But the quality of discussion here is generally much higher than elsewhere. I would like us to try discussing politics and see how it goes - but I'd prefer a new toplevel post to an Open Thread discussion.
Replies from: Larks↑ comment by Larks · 2010-02-01T22:53:24.296Z · LW(p) · GW(p)
I think the quality of discussion is higher because we don't discuss politics: if we started, we'd pull in political trolls and fanatics. Considering how common political discussion sites are, and what a city on a hill LW is, I'd be very conservative about anything that might open the gates. We have rarity value, and it could be hard to regain.
Perhaps a minimum karma level to discuss politics?
Replies from: Jack, ciphergoth↑ comment by Jack · 2010-02-02T00:11:30.101Z · LW(p) · GW(p)
This is a special case of a general problem. There are lots of solutions; it just doesn't seem likely that any will be implemented (unless, as rumor has it, there is already a secret forum for discussing other subjects, to which LessWrongers are only invited once they have proven themselves).
Also, I'm not sure that just saying "Hey people! Talk about politics over here!" is going to lead to a great discussion. I'd be much more interested in a discussion of how and where what we have in common as rationalists should affect our political views. It seems likely that we all ought to be able to come to important but limited agreements (about how to think about policy, about how the policy-making process should be organized, and about a select few policy issues - religious issues, science, maybe a few more) from which we could expand to other areas, constructively. Maybe we all end up as 'liberaltarians', maybe not. But there needs to be a common starting point, or everyone will just default to signaling, talking points and rhetorical warfare.
Replies from: Larks↑ comment by Larks · 2010-02-02T00:39:49.101Z · LW(p) · GW(p)
That's the post I was trying to find, and failing. However, if there is such a conspiracy (beyond simply random chats between clever people), it's either quite small or not based on karma, or you (with nearly exactly 10 times my karma count) would have been invited.
I have a parallel problem at University: trying to find discussion groups, debating societies, etc. where people agree enough on the basics and are interested in the truth, which are small enough that signalling isn't too great a problem, easy enough to enter that I'm able to speak, and yet large enough to self-perpetuate.
Maybe Econlog or somesuch should create a LessWrong Parallel?
↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T23:32:05.844Z · LW(p) · GW(p)
It's probably not worth discussing ideas that require code changes unless you're in a position to implement them and present patches, and even then it may not be accepted.
I think we fend off trolls pretty well: we tend to just vote them down and otherwise ignore them. I don't think we have to worry about a troll invasion here.
Replies from: Larks↑ comment by Larks · 2010-02-02T00:33:27.224Z · LW(p) · GW(p)
Equally, I don't think it's worthwhile discussing drastic subject-matter changes, partly because that is the level of change that would be required to effect this one safely.
At the moment, trolls are in the minority, and both their views and their presentation differ markedly from ours: whether by Aumann or groupthink, we have a large set of beliefs we agree on that aren't widely held outside LW, and a special terminology that we use.
However, in politics none of these would hold: widespread disagreement makes it hard to tell what is in good faith, we don't have a specialised language for it, and without a rigorous way of approaching the problems, we are unlikely to reach a closer set of conclusions than any other fairly libertarian internet grouping.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T08:37:37.710Z · LW(p) · GW(p)
I'd prefer a top-level post. They're cheap and this could get busy.
You could literally post just this.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-02-02T05:45:07.486Z · LW(p) · GW(p)
If a top-level post is made of this, then make it about politics in general, not just US politics. (As a member of a controversial political movement, I'd be curious to hear what people's opinions here are on current copyright law.)
Replies from: Kevin, RobinZ↑ comment by Kevin · 2010-02-02T05:56:39.218Z · LW(p) · GW(p)
I'm an intellectual property abolitionist, which makes my views much more extreme than the Pirate Party, though I'm aware that they have watered themselves down for pragmatic reasons and that the founders are most likely IP abolitionists.
I'll wait for the top level post though... I'd post it myself but figure I should finish Politics is the Mind Killer first.
I have a nearly unlimited number of viewpoints on political matters, but more and more I think it's almost irrelevant. Politics seems like this kind of fun thing where we can have infinitely many new and continuing arguments, but the arguing is never going to accomplish anything. I'm not a senator, and even senators quickly become jaded and cynical at how little actual power their high status provides.
Replies from: Morendil, Torben↑ comment by Morendil · 2010-02-02T08:50:35.338Z · LW(p) · GW(p)
Maybe we could turn the discussion to "how might a community of rationalists actually accomplish something, re. this or that issue"?
Replies from: Kevin↑ comment by Kevin · 2010-02-02T08:56:01.841Z · LW(p) · GW(p)
I think the answer is most likely that we can't. I'd be willing to have a discussion potentially leading us to that conclusion. I'll put it in my too-long queue of top-level posts to write...
The guy who wanted to start a polling firm might have a good idea, but I think if Nate Silver hasn't started his own polling firm yet we probably aren't going to.
Historically I've stayed away from political activism, but I got involved with a group trying to raise awareness about the police assaults on the University of Pittsburgh after the G20 summit. I thought it was a small enough issue that we could make a difference, but obviously we didn't. While I give the posters here a little more credit for being able to get things done than my leftist friends in Pittsburgh, I have no practical ideas for how we could actually accomplish something not at the meta-level.
Probably the best thing we could do is try to spread some of the memes raised in Politics is the Mind Killer.
Replies from: Torben, nawitus↑ comment by Torben · 2010-02-06T12:28:45.900Z · LW(p) · GW(p)
Get control of the educational system.
I don't know how to feasibly do that except by convincing a bunch of people (future teachers) which reduces to the initial question, how to change stuff (= how to convince people). Sorry.
But at least future teachers are a smaller group of people than a majority of voters.
↑ comment by nawitus · 2010-02-03T11:18:04.985Z · LW(p) · GW(p)
Well, the aforementioned Pirate Party is an example of successful political activism. Our party is already doing politics even before our first national elections, since the party often gives out statements on new legislation as requested by the justice ministry. Our sister parties in Sweden and Germany are even more successful. And many of the lesswrong/transhumanist people are active in the Finnish Pirate Party.
↑ comment by RobinZ · 2010-02-02T12:48:46.729Z · LW(p) · GW(p)
I believe my views were formed largely based on Macaulay: terms on copyright should be short (no longer than 30 years, I would say), and I take a liberal view on derivative works. There are also interesting things to say about orphaned works.
↑ comment by ata · 2010-02-02T11:18:25.053Z · LW(p) · GW(p)
I think one thing we could discuss without wandering onto a minefield is political mechanisms — discussions of ways we can make the system (legislative procedures, division of power, voting systems, etc.) more rational, without discussing specific policies.
We would still have to be careful, as even this depends on certain subjective goals — what do we want the political system to do, ultimately? — but that itself could be an interesting meta-discussion. However, it's a discussion we'd probably have to have before we even start talking about ideal political mechanisms, because we need to agree on what we want a political system to accomplish (that is, what an ideal policy-making system would look like, and how it would acquire and realize values, keeping in mind that it'll have to be run mostly by humans for the time being) before we can start understanding how it might work.
And writing that paragraph made me realize a meta-meta-discussion that might also be necessary: is it even possible to separate policy goals from political structural goals? Maybe it is, but it could be difficult. The practical outcome of a direct democracy, a representative democracy, a futarchy, and a dictatorship will all be significantly different, yet in somewhat predictable directions, so even if we banish all policy discussion, we'd need to figure out how to uncover and squash any bias that could make us prefer certain abstract political systems because of actual specific policy goals.
Or maybe we're not interested in doing that in the first place — maybe you're satisfied with supporting systems of government that are simply most likely to result in your own values being fulfilled, in which case your ideal system would be a dictatorship run by you (or the system that's the best at approximating the same), unless you value democracy/pluralism itself more strongly than anything you could achieve as dictator.
And I think I'll stop musing here, before this post becomes an infinite regress of paragraphs deconstructing their predecessors. My original point was going to be that discussing rational systems of government could be less mind-killing than discussing specific policies and politicians and parties, but now it appears it might not be any less complicated.
Replies from: blogospheroid, Kevin↑ comment by blogospheroid · 2010-02-02T11:49:15.827Z · LW(p) · GW(p)
Talking at a meta level, I like Futarchy's split between values and policies to achieve them.
That is a very useful split which can be adopted even in non-futarchic governments.
For example, it is an obvious moral principle to take everybody's values into account: universal franchise for values.
It is not so obvious that everyone's opinion about how to achieve those values should be taken equally seriously, simply because of differing expertise.
↑ comment by mattnewport · 2010-02-02T00:51:43.554Z · LW(p) · GW(p)
I'd be more interested in an initial discussion of whether it is in fact rational to discuss politics (except to the extent that you gain intrinsic enjoyment from the discussion). It is clear that for most people in most elections their vote is irrelevant (the chances of it making any difference are negligible). This suggests that time spent discussing politics for the purposes of deciding how to vote is wasted and such discussion is irrational. Arguably the only people who rationally devote any significant time to thinking or talking about politics are the small group of people who actually make their living as politicians or political commentators. Robin Hanson has often made the point that politics is not about policy - it is mostly about signalling status and in-group/out-group dynamics. What would we be hoping to achieve by discussing politics at less wrong?
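The "negligible" claim about individual votes can be made concrete with the standard back-of-envelope model: treat every other voter as a (possibly biased) coin flip and ask how often one extra vote breaks an exact tie. The electorate sizes and the 51% figure below are illustrative, and real elections are of course far more structured than a binomial:

```python
from math import exp, lgamma, log

def pivotal_probability(n_voters, p=0.5):
    """Chance one extra vote breaks an exact tie among n_voters others,
    under a toy binomial model (computed in log space to avoid underflow)."""
    if n_voters % 2:  # an exact tie needs an even number of other voters
        n_voters -= 1
    k = n_voters // 2
    # log of C(n, k) * p^k * (1-p)^k
    log_prob = (lgamma(n_voters + 1) - 2 * lgamma(k + 1)
                + k * log(p) + k * log(1 - p))
    return exp(log_prob)

print(pivotal_probability(1_000))          # ~0.025 in a tiny, perfectly balanced electorate
print(pivotal_probability(100_000, 0.51))  # essentially zero once the race isn't a coin flip
```

Under this model the pivotal probability falls off like 1/sqrt(n) when the race is exactly tied, and exponentially fast as soon as it isn't.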
Replies from: blogospheroid, LucasSloan↑ comment by blogospheroid · 2010-02-02T12:00:23.280Z · LW(p) · GW(p)
How about a more reasonable topic to discuss - Corporate Organizational Design for a seastead.
You are starting a seastead with certain ideas on how to make money in the long run. How do you make a structure that is better than present governments or corporations?
Corporate design is much simpler than already present nation design.
Also, a good design emerging from this will theoretically be better than any political design in today's nations, since a seastead by definition starts with a huge economic disadvantage.
Why would LW want to discuss this - A well run corporation might be the closest thing in the present world to a superintelligence.
Let's discuss.
↑ comment by LucasSloan · 2010-02-02T00:59:44.642Z · LW(p) · GW(p)
What would I hope to accomplish? I would hope we could come up with policy proposals which might be cheap to enact.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-02T01:10:34.859Z · LW(p) · GW(p)
But what would be the use of that? Do you have the ear of the president? Do you have reason to think that the problem with politics is a lack of good policy ideas rather than the inability of the political process to enact good policy? Are you prepared to devote yourself full time to promoting whatever wonderful never-before-considered policies the great minds of less wrong are able to concoct? Politics is not about Policy.
Replies from: LucasSloan, Jack↑ comment by LucasSloan · 2010-02-02T01:25:41.294Z · LW(p) · GW(p)
If we were to decide to discuss politics, the best possible use I can think of is to generate strategies for implementing (cheap) positive changes in policy. As for how to implement them, California State Senator Joe Simitian has his "There Oughta Be a Law" contest.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-02T01:45:36.350Z · LW(p) · GW(p)
the best possible use I can think of is to generate strategies for implementing (cheap) positive changes in policy.
Competitive government seems about the best hope for this to me; though I rate its chances of success pretty low, it seems slightly less hopeless than fixing conventional politics.
The real question is whether you think discussing politics is an effective use of your time. I'm more interested in discussions where ultimately I can take concrete actions that deliver the most expected value possible. Politics doesn't generally seem like such a topic.
↑ comment by Jack · 2010-02-02T01:24:56.204Z · LW(p) · GW(p)
I'm similarly skeptical about the benefits of a conversation about politics, but let's not overgeneralize. Politics is not about policy. Except when it is. Certain parts of government are more amenable to policy changes than others. The key is identifying those areas and organizing around them. Change is usually easiest in areas where there aren't entrenched interests influencing legislators, where the general public doesn't feel strongly one way or the other, and when legislators aren't running for reelection or aren't at risk of losing. Areas where I think Less Wrong could make a non-trivial impact: federal science policy (specifically, streamlining the grant process to save scientists time and resources) and local public school curricula (specifically, finding some amenable school districts and trying to improve, add to, or create critical thinking/classical rationality curricula).
If people were interested I'd be especially interested in digging into the second.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-02T01:38:29.214Z · LW(p) · GW(p)
Change is usually easiest in areas where there aren't entrenched interests influencing legislators, where the general public doesn't feel strongly one way or the other, and when legislators aren't running for reelection or aren't at risk of losing.
...
local public school curriculum
Umm, really?
Replies from: Jack↑ comment by Jack · 2010-02-02T01:48:20.352Z · LW(p) · GW(p)
Like I said:
where the general public doesn't feel strongly one way or the other
A similar way of saying the same thing: change gets easier when debates don't map onto pre-existing signaling narratives. Obviously anything that explicitly threatens religion is going to be a bitch to get through. I don't think a critical thinking course in liberal districts would raise a lot of ire, even if we were giving students tools that, properly applied, would tell them something about their religious beliefs.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-02T01:53:28.121Z · LW(p) · GW(p)
I think local public school curriculum fails on two of your criteria: 'entrenched interests influencing legislators' (teachers' unions, publishers of textbooks, parents' groups, think tanks, etc.); 'where the general public doesn't feel strongly one way or the other' (parents tend to care quite a bit about what/how their kids are taught, ideologically motivated groups care quite a bit about what kids are taught, many interest groups have opinions about what focus education should have). There are already lots of groups trying to influence education in all kinds of ways, including local public school curriculums.
Replies from: Jack↑ comment by Jack · 2010-02-02T02:32:27.311Z · LW(p) · GW(p)
(teachers' unions, publishers of textbooks, parents' groups, think tanks, etc.)
Teachers' unions are definitely an entrenched interest, but they aren't really entrenched on the issue of curriculum. I'm not trying to fire them, just add another elective class or change a couple of class days in the English curriculum. Textbook publishers, sure, but they don't necessarily have opposing views; you could just as easily turn them into allies. Parents' groups, think tanks? I would start in a poor or urban district, but I can't think of any reason parents' groups would oppose a critical thinking elective in liberal, wealthy districts either.
Obviously all policy areas have someone 'invested'. But it isn't like getting rid of subsidies for the sugar industry, ending teacher tenure or limiting unionizing.
parents tend to care quite a bit about what/how their kids are taught, ideologically motivated groups care quite a bit about what kids are taught, many interest groups have opinions about what focus education should have
These groups care about curriculum when the debate involves sex or religion. That's about it. I'm not trying to teach 2nd graders about sex or tell anyone their religion is false. Aspects of critical thinking are already part of the AP Language curriculum; we're not talking about some radical transformation of the school system. Around half the parents at my public high school were lawyers - you're gonna tell me they're going to object to a critical thinking class?
Again, obviously people are affected by policy. But not every issue makes people go crazy like evolution, sex or money. I'm actually surprised you picked the curriculum issue to criticize... reforming the government grant-giving bureaucracy strikes me as a lot harder.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-02T02:38:07.130Z · LW(p) · GW(p)
I'm actually surprised you picked the curriculum issue to criticize... reforming the government grant-giving bureaucracy strikes me as a lot harder.
You may well be right, but I know very little about grant-giving so I didn't address it. I imagine there are a number of powerful interest groups involved there as well however.
↑ comment by Liron · 2010-02-01T07:11:57.136Z · LW(p) · GW(p)
What do you think President Obama should focus on? And do you think he has been squandering the bully pulpit?
Replies from: ChristianKl↑ comment by ChristianKl · 2010-02-01T11:33:11.108Z · LW(p) · GW(p)
I honestly don't really understand the question. A president should be able to push several different agendas at the same time.
↑ comment by Liron · 2010-02-01T07:10:51.684Z · LW(p) · GW(p)
Thoughts on Democrats and Republicans?
My impression is that Democrats have much more intellectually honest, serious public discourse, although that's not saying much.
Replies from: LucasSloan, Jayson_Virissimo↑ comment by LucasSloan · 2010-02-02T00:19:13.315Z · LW(p) · GW(p)
My usual response to this question is that the average Democrat is better than the average Republican, but the very best Republicans are better than the very best Democrats. However, given that my model of the "average Democrat" is the average person in the Bay Area, and my model of the "average Republican" is some mix of Fox News wacko and George W. Bush, I'm not sure I should trust this. Does anyone have any anecdotes about Democrats outside of the Bay Area? Republicans?
↑ comment by Jayson_Virissimo · 2010-02-01T17:49:42.176Z · LW(p) · GW(p)
Have you witnessed any actual discourse in person, or are you relying on the news media to obtain information on this topic? If so, you should expect that if the news media is biased, your view will be biased as well (if you haven't already corrected for this).
Replies from: Liron↑ comment by Liron · 2010-02-01T18:36:36.619Z · LW(p) · GW(p)
By "public discourse" I did mean things like talking points and media interviews. I'm sure many republicans have extremely intelligent private conversations over policy, e.g. Hank Paulson.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-01T18:51:25.361Z · LW(p) · GW(p)
I imagine most of Hank Paulson's private policy conversations revolved around devious new schemes to funnel more billion dollar backdoor bailouts to Goldman Sachs.
Replies from: Kevin↑ comment by Kevin · 2010-02-02T06:02:06.235Z · LW(p) · GW(p)
Was this downvoted for conspiracy theory-ing or because an actual majority of Hank Paulson's private discussions weren't really about how to steal money? I agree that Paulson couldn't have spent a majority of the time discussing how to funnel money to his friends and comrades, but it seems reasonably well established that some of the financial meltdown conspiracy theories are true.
↑ comment by Liron · 2010-02-01T07:07:53.694Z · LW(p) · GW(p)
What good things can be said about G. W. Bush?
Replies from: ciphergoth, wedrifid, NancyLebovitz, Kevin, CarlShulman, Jayson_Virissimo↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T08:35:22.110Z · LW(p) · GW(p)
He hugely increased African aid and foreign aid in general (though with big deadly strings). That came as a big surprise to me.
Replies from: Matt_Simpson↑ comment by NancyLebovitz · 2010-02-01T21:08:07.852Z · LW(p) · GW(p)
As a result of the conquest of Iraq, water was let into the marshes which Saddam Hussein had been letting dry out. This is a clear environmental win.
↑ comment by Kevin · 2010-02-02T06:10:42.024Z · LW(p) · GW(p)
The war in Iraq was the beginning of the end of US hegemony.
Replies from: Torben↑ comment by Torben · 2010-02-06T12:20:48.973Z · LW(p) · GW(p)
I think Dubya definitely began the end of US hegemony (which I see as a bad thing), but probably in larger part because of his devastation of the US economy and the placement of a gigantic US debt into the hands of its sole future strategic rival.
Replies from: Kevin↑ comment by Kevin · 2010-02-06T22:12:19.364Z · LW(p) · GW(p)
Yeah, it was the destruction of the economy that made him the second worst US President (after Lincoln). All of those things contributed to the end of US Hegemony, I meant that the war in Iraq functions as a potent symbol of the end of that power.
Our disagreement is about the meaning and purpose of globalization; let's not get into that discussion right now, it'll take a while.
↑ comment by CarlShulman · 2010-02-01T15:48:34.464Z · LW(p) · GW(p)
Millions of lives saved in Africa through expanded public health.
↑ comment by Jayson_Virissimo · 2010-02-01T17:46:38.305Z · LW(p) · GW(p)
He didn't increase the projected level of debt for the US as much as the current president.
Replies from: nawitus↑ comment by nawitus · 2010-02-01T18:13:24.123Z · LW(p) · GW(p)
You can't compare those, because the economic crisis happened mostly after Bush. Large debts have been taken on by pretty much all Western nations.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2010-02-01T18:32:55.242Z · LW(p) · GW(p)
You can compare those, because the large debts weren't caused by the "economic crisis". The fact that most Western nations also ran up debt doesn't mean the economic crisis caused the debt increase, only that they chose the same response to the economic crisis (which probably has more to do with increasing their own discretionary power than with lowering unemployment).
Singapore didn't run up huge levels of debt and has a much lower unemployment level than the countries that did run up debt. They could have chosen otherwise, but didn't.
Replies from: nawitus↑ comment by nawitus · 2010-02-02T18:37:07.512Z · LW(p) · GW(p)
Singapore isn't a Western nation or a fully developed one, and they have extremely high economic growth (around 10%), so that's not comparable to stable Western economies. Singapore had economic growth of 1.1% during 2008, so they didn't have to borrow anything that year.
In fact, a quick search showed that Singapore had significant budget deficit for 2009: "-- 2009/2010 budget deficit to be 6 pct of GDP, before accounting for transfers. " So it seems Singapore has used their national reserves immediately after their economy fell, just like all the other Western nations. They don't have to take a loan because they have significant national reserves.
Although it's true that Obama has increased spending more than Bush, even if he didn't increase it (inflation adjusted) at all, the U.S. would have taken a significant loan, just like all the other Western nations, as tax income dropped for probably all of them.
Furthermore, the economic crisis did indeed cause large debts, because it caused state tax income to drop, and the rest was borrowed because Western nations do not want to reduce spending. Although nothing seems to have consensus in economics, many economists advised against cutting spending, which can make the economic crisis even worse. I think that was even a common agreement amongst most Western nations.
Summing up, your claim that large debts are a bad thing in this situation has not been proved at all. Although I'm not an expert in economics, there's no scientific consensus for that claim.
Singapore, 22 Jan. S$20.5b (US$15b) might not sound like a lot of money in these days of trillion dollar collapses, but when it represents 6% of GDP (estimated at US$227b in 2007), then it becomes one of the most aggressive stimulus plans on a per capita basis on the planet.
Replies from: Jayson_Virissimo
↑ comment by Jayson_Virissimo · 2010-02-02T18:59:38.001Z · LW(p) · GW(p)
Singapore isn't a Western nation or a fully developed one, and they have extremely high economic growth (around 10%), so that's not comparable to stable Western economies.
Whether Singapore is considered "Western" or not is irrelevant. The disagreement was over whether the "economic crisis" forced the current US Government to run up large amounts of debt. Singapore shows that not only is it possible to face a global economic crisis without running up large amounts of debt, but that doing so can leave you better off in terms of unemployment. And to claim that Singapore isn't a "developed" nation is quite strange. Singapore has a per capita GDP of $50,300, while the US only has a per capita GDP of $46,400, Germany has a per capita GDP of $34,200, and France has a per capita GDP of $32,800. Are you going to argue that the US, Germany, and France aren't fully developed?
Furthermore, economic crisis did indeed cause large debts, because it caused the tax income for the state to drop, and the rest was loaned because Western nations do not wan't to reduce spending.
The economic crisis only caused large debt increases in the sense that going out to eat every day causes me to take on debt (because I refuse to cut back elsewhere in my budget). The fact remains that there were viable alternatives to multiplying the debt (alternatives that actually worked better in the case of Singapore).
Although nothing seems to have consensus in economics, many economists made the decision not to cut spending, which can make the economic crisis even worse. I think that was even a common agreement amongst most Western nations.
The fact that Western nations listened to the economists that told them that current events justifies them increasing their own discretionary power and ability to give handouts to their allies instead of listening to economists that told them otherwise doesn't surprise me one bit.
Replies from: nawitus↑ comment by nawitus · 2010-02-02T20:29:26.158Z · LW(p) · GW(p)
I posted a link that showed Singapore had a budget deficit the very second their economy shrank; in fact, the same thing happened in Western nations. Singapore didn't have to take out a loan because they had a national reserve.
So in fact the policy Singapore has is the same as Western nations', with the only difference that Singapore happened to have money saved. Singapore didn't want to cut spending, so they used their savings. There's no real difference in policy; they even have a stimulus package.
Replies from: SilasBarta, Jayson_Virissimo↑ comment by SilasBarta · 2010-02-02T21:00:33.602Z · LW(p) · GW(p)
So in fact the policy Singapore has is the same as Western nations, with the only difference that Singapore happened to have money saved.
How do you get that as being a coincidence? The very same things that make a nation spend prudently are the ones that make it have a reserve fund in the first place! What's America's emergency reserve fund? There isn't one -- just the possibility of borrowing more. (Not necessarily a bad move for a nation with the US's credit rating, but still.)
I bring this up in part because it parallels the differences between US states. Some states had to get backdoor bailouts through grants for projects, while others (like Texas) only had the budget problem of "couldn't contribute as much to the rainy day fund (a real account) this time". The very concept is foreign to e.g. California.
Yeah, yeah, mind = killed, etc.
↑ comment by Jayson_Virissimo · 2010-02-02T20:36:54.420Z · LW(p) · GW(p)
I see, I don't remember any of that being in the post I replied to (perhaps you edited your post?). I see how that article supports your view that Singapore did engage in "economic stimulus". My (mis)perception comes from the fact that I was only looking at the change in the debt level, when they paid for their "stimulus package" out of savings (so didn't show up as much increase in debt).
On the other hand, I think my judgment that Singapore responded better than the US to the economic downturn is still well supported. Their Stimulus was much more focused on lowering the cost of hiring workers than the US stimulus package and for that the current administration deserves some blame. Don't you agree?
comment by GreenRoot · 2010-02-04T19:02:04.557Z · LW(p) · GW(p)
How about per-capita post scoring?
Why not divide a post's number of up-votes by the number of unique logged-in people who have viewed it? This would correct for the distortion of scores caused by varying numbers of readers. Some old stuff is very good but not read much, and scores are in general inflating as the Less Wrong population grows.
I think such a change would be orthogonal to karma accounting; I'm only suggesting a change in the number displayed next to each post.
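A minimal sketch of the display calculation, assuming the site tracks unique logged-in views per post (the field names here are hypothetical, not actual site internals):

```python
def per_capita_score(upvotes: int, unique_logged_in_views: int) -> float:
    """Display score: fraction of unique logged-in viewers who upvoted.

    Assumes a hypothetical `unique_logged_in_views` counter per post;
    karma accounting itself is untouched.
    """
    if unique_logged_in_views == 0:
        return 0.0  # avoid division by zero for posts nobody has viewed
    return upvotes / unique_logged_in_views

# An old, rarely-read post with 8 upvotes from 20 viewers would now
# display higher than a recent post with 30 upvotes from 300 viewers:
print(per_capita_score(8, 20))    # 0.4
print(per_capita_score(30, 300))  # 0.1
```

This corrects for readership inflation exactly as described: the displayed number no longer grows just because the site's population does.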
Replies from: denisbider↑ comment by denisbider · 2010-02-05T01:44:11.070Z · LW(p) · GW(p)
For posts, this might work.
For comments, these are loaded without most readers reading them. Furthermore, the likelihood that any single comment will be read decreases with the number of all comments. It seems like this would work much less well for comments.
comment by Wei Dai (Wei_Dai) · 2010-02-03T23:40:41.532Z · LW(p) · GW(p)
I'd like to draw people's attention to a couple of recent "karma anomalies". I think these show a worrying tendency for arguments that support the majority LW opinion to accumulate karma regardless of their actual merits.
- Exhibit A. I gave a counterargument which convinced the author of that comment to change his mind, yet the original comment is still at 14.
- Exhibit B. James Andrix's comment is at 20, while Toby Ord's counterargument is at 3. This issue is still confusing to me so I can't say for sure that Toby is right and James is wrong, but I think Toby has the stronger argument, and in any case I see no way that 20 to 3 is justified on the merits.
ETA: Please do not vote down these comments due to this discussion. My intention is to find a fix for a systemic problem, not to cause these particular comments to be voted down.
Replies from: Unknowns, mattnewport, Jack, wedrifid, wedrifid↑ comment by Unknowns · 2010-02-04T07:57:59.953Z · LW(p) · GW(p)
I've noticed in general that later replies to comments get fewer votes, possibly because fewer people are still reading. Support for this is that your comment on the other thread already has 7 points, and all of this should go to Toby Ord.
Also, in Exhibit B, James Andrix gave an example attacking religion, which likely got him some votes (for attacking the Great Enemy), and since Toby Ord didn't support his argument, this probably stopped him from getting votes, since by so doing, he defended the Enemy, which is treason.
Replies from: Unknowns↑ comment by mattnewport · 2010-02-03T23:50:26.986Z · LW(p) · GW(p)
By 'anomaly' you appear to mean 'not the scores I would have assigned'. That's not the way karma works.
Replies from: bgrah449, Wei_Dai↑ comment by bgrah449 · 2010-02-03T23:55:49.608Z · LW(p) · GW(p)
Eh, that's not a very generous reading of what he wrote. Exhibit A has a post at very high karma despite arguments that convinced its own author to drop support for it. That's not karma "working," either.
Replies from: mattnewport, wedrifid↑ comment by mattnewport · 2010-02-04T00:17:12.234Z · LW(p) · GW(p)
For some implicit definition of karma 'working' that is unclear. Absent a bug in the karma scoring code, a discrepancy between the karma scores you observe and the karma scores you think are warranted seems just as likely to be an inaccuracy in the observer's model of how karma is supposed to work as a problem with the karma system.
What the original post seems to be missing to me is an explanation of what scores the karma system should be producing for these posts, a justification for why that is what the karma system should be producing, and ideally a suggestion for changes to either the implementation of the system or the way people allocate their votes that would produce the desired changes. Absent the above it looks a lot like complaining that people aren't voting the way you think they ought to.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T00:47:48.198Z · LW(p) · GW(p)
Well, to start with I wanted to see if others agree that a problem exists here. If most people are satisfied with how karma is working in these cases, then there is not much point in me spending a lot of time writing out long explanations and justifications, and trying to find solutions. So at this stage, I'm basically saying "This looks wrong to me. What do you think?" I think I did give some explanations and justifications, but I accept that more are needed if an eventual change to the karma system is to be made.
Replies from: mattnewport, Unknowns↑ comment by mattnewport · 2010-02-04T00:51:09.450Z · LW(p) · GW(p)
Ok, as one data point, I don't see a particular problem here. The higher rated posts in your examples deserved higher ratings in my opinion. Karma mostly functions as I expect it to function.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T07:42:19.776Z · LW(p) · GW(p)
Thanks, but can you explain why you think people who post wrong arguments deserve to get more karma than those who correct the wrong arguments? Suppose I thought of uninverted's argument, but then realized that it's wrong, so I don't post my original argument, and instead correct him when he posts his. I end up with less karma than if I hadn't spent time thinking things through and realizing the flaw in my reasoning. Why do we want to discourage "less wrong" thinking in this way?
It seems to me that the way karma works now encourages people to think up arguments that support the majority view and then post them as soon as they can without thinking things through. Why is this good, or "expected"?
Replies from: mattnewport, wedrifid↑ comment by mattnewport · 2010-02-04T07:59:24.510Z · LW(p) · GW(p)
First, I think you're missing a karma pattern that I've noticed which is that the first post in a thread often gets more extreme votes (scores of greater absolute magnitude) than subsequent posts. I imagine this is because more people read the earlier posts in a thread and interest/readership drops off the deeper the nesting gets. I don't see any simple way to 'fix' that - it has the potential to be gamed but I don't think gaming the system in that respect is a major problem here.
Second I don't think karma strictly reflects 'correctness' of arguments, nor do I think it necessarily should. People award karma for attributes other than correctness. For example I imagine some of the upvotes on uninverted's "But I don't want to be a really big integer!" comment were drive-by upvotes for an amusing remark. Some of those upvoters won't have stayed for the followup discussion, others might have awarded more karma for pithy and amusing than accurate but dry. I think points-for-humour is as likely an explanation here as points-for-majority-opinion. Maybe you don't think karma should be awarded for attributes other than correctness. If so, go ahead and bring it up and see what the rest of the community thinks.
As a side note, I think you probably shouldn't have chosen a thread where you were a participant as an example. It gives the slight impression that your real complaint is that uninverted got more brownie points than you even though you were right and it's just not fair. If I didn't recognize your username as a regular and generally high-value contributor I might not have given you the benefit of the doubt on that.
Replies from: wedrifid, ciphergoth, Douglas_Knight, Wei_Dai↑ comment by wedrifid · 2010-02-04T14:55:30.705Z · LW(p) · GW(p)
As a side note, I think you probably shouldn't have chosen a thread where you were a participant as an example. It gives the slight impression that your real complaint is that uninverted got more brownie points than you even though you were right and it's just not fair.
I was given that impression somewhat but then on reflection I realized a more likely prompt for the Wei's frustration was the Nelson/Komponisto/Knox affair. Not wanting to bring that issue up yet again, he chose some other similar examples that didn't come with as much baggage. That one of them was his own was just unfortunate.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T15:19:37.424Z · LW(p) · GW(p)
Upvoted for the correct inference. This is definitely one of those rare times when laziness failed to pay off. :)
↑ comment by Paul Crowley (ciphergoth) · 2010-02-04T09:11:28.904Z · LW(p) · GW(p)
Median karma would de-emphasize number of voters and put greater emphasis on the score they assigned.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-04T17:01:26.217Z · LW(p) · GW(p)
That would presumably require a fairly different rating system, under the current system median karma would mean posts could only ever score -1, 0 or 1. That doesn't seem like an improvement to me.
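A quick illustration of the point, assuming each vote is recorded individually as +1 or -1 as under the current system:

```python
from statistics import median

# With votes restricted to +1/-1, the median of any vote list can only
# land on -1, 1, or 0 (the 0 arising from an even split):
print(median([1, 1, 1, -1]))   # 1.0
print(median([1, -1]))         # 0.0
print(median([-1, -1, 1]))     # -1
```

So a median-based display is only meaningful if voters can assign scores on a wider scale, as the next comment suggests.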
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-04T17:29:15.716Z · LW(p) · GW(p)
Yes; I imagine a range from 0/5 to 5/5 as per eg Amazon book rating sites. One problem with this however is that people don't use the whole range.
↑ comment by Douglas_Knight · 2010-02-04T23:31:00.928Z · LW(p) · GW(p)
the first post in a thread often gets more extreme votes (scores of greater absolute magnitude) than subsequent posts.
In that the main effect of karma on the reader is to sort posts, comparing scores at different levels of nesting is irrelevant. It is a very biased heuristic to read only comments at, say, karma > 5. I don't know if anyone uses this heuristic. A lot of people only read comments at karma > -4, which probably has a similar bias, but I wouldn't worry about it.
because more people read the earlier posts in a thread and interest/readership drops off the deeper the nesting gets.
People who post replies are serving a smaller audience than people who post higher level comments. It is possibly good that they are proportionately rewarded with karma.
↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T09:18:32.592Z · LW(p) · GW(p)
I don't see any simple way to 'fix' that - it has the potential to be gamed but I don't think gaming the system in that respect is a major problem here
It's not so much a potential to be gamed, as encouraging people to post without thinking things through, as well as misleading readers as to which arguments are correct. I don't know if there is a simple fix or not, but if we can agree that it's a problem, then we can at least start thinking about possible solutions.
Maybe you don't think karma should be awarded for attributes other than correctness. If so, go ahead and bring it up and see what the rest of the community thinks.
In a case where a comment is both funny and incorrect, I think we should prioritize the correctness. After all, this is "Less Wrong", not "Less Bored".
If I didn't recognize your username as a regular and generally high-value contributor I might not have given you the benefit of the doubt on that.
I was too lazy to find another example, and counting on the benefit of the doubt. :)
ETA: Also, I think being upvoted for supporting the majority opinion is clearly a strong reason for what happened, especially in Exhibit B, where the comment is deep in the middle of a thread, and has no humor value.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-04T17:18:15.491Z · LW(p) · GW(p)
It's not so much a potential to be gamed, as encouraging people to post without thinking things through, as well as misleading readers as to which arguments are correct.
I'm not sure that's true. As I originally said, the first comment in a thread often gets karma of greater absolute magnitude. Bad posts get voted down more harshly as well as good posts getting voted up more. I think the higher readership for top level comments explains this. It means that from a karma-gaming perspective posting a top level comment is only a good move if you are confident it will be received positively.
In a case where a comment is both funny and incorrect, I think we should prioritize the correctness.
How about other attributes not directly related to correctness? How should niceness be judged relative to correctness for example?
I still think there's a conflict between you wanting people to give more upvotes to things that are correct but fewer to things that 'agree with the majority opinion'. I don't think people upvote because a comment 'agrees with the majority opinion', they upvote because a comment agrees with their opinion. That tends to produce greater upvotes for the majority view. In your second example I think the greater upvotes for James Andrix reflect the fact that he is more correct. Your real complaint seems to be that the majority opinion is wrong on this issue. The best way to fix that is to make a better argument for the other view, not to complain that people are failing to recognize correct arguments and upvote them.
↑ comment by wedrifid · 2010-02-04T09:56:11.609Z · LW(p) · GW(p)
Ok, as one data point, I don't see a particular problem here. The higher rated posts in your examples deserved higher ratings in my opinion. Karma mostly functions as I expect it to function.
Thanks, but can you explain why you think people who post wrong arguments deserve to get more karma than those who correct the wrong arguments?
Mattnewport did not claim or otherwise imply that he thought that.
↑ comment by Unknowns · 2010-02-04T08:04:34.133Z · LW(p) · GW(p)
I agree with you, but I think it has to do with the way people vote (mainly voting in favor of things they agree with and against things they disagree with), and with which comments are read by whom. In other words, changing the karma system probably is not a way to address it: people have to change their behavior.
Replies from: Wei_Dai, mattnewport↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T08:11:12.435Z · LW(p) · GW(p)
Yes, I agree, and by "karma system" I meant to include how people think they should vote.
↑ comment by mattnewport · 2010-02-04T08:10:40.526Z · LW(p) · GW(p)
It seems a little inconsistent to expect people to vote up things for being correct but not to vote up things simply because they agree with them. I tend to agree with things I think are correct and disagree with things I think are incorrect.
Replies from: Unknowns↑ comment by wedrifid · 2010-02-04T09:46:32.204Z · LW(p) · GW(p)
If you look a little closer you see that the author was persuaded to concede by a later comment in the argument, and was then more generous and conciliatory than he perhaps needed to be. I would be extremely disappointed if the meta discussion here actually made the author retract his comment. What we have here is a demonstration of why it is usually status-enhancing to treat arguments as soldiers. If you don't, you're just giving the 'enemy' ammunition.
Willingness to concede weak points in a position is a rare trait and one that I like to encourage. This means I will never use 'look, he admitted he was wrong' as way to coerce people into down-voting them or shame those that don't.
EDIT: I mean status enhancing specifically not rational in general.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T10:15:17.144Z · LW(p) · GW(p)
Willingness to concede weak points in a position is a rare trait and one that I like to encourage. This means I will never use 'look, he admitted he was wrong' as way to coerce people into down-voting them or shame those that don't.
That's a very good point, and I've added a note to my opening comment to convey that I don't want people to down-vote these particular comments.
↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T00:17:23.851Z · LW(p) · GW(p)
I think I should point out a problem with the karma system when I see it, and use evidence and arguments to back up my position and gather support and ideas for fixing the problem. I believe that's how a "community" works.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-04T00:18:40.015Z · LW(p) · GW(p)
and use evidence and arguments to back up my position
Sure, and that's what I felt was missing in the original post.
↑ comment by wedrifid · 2010-02-04T16:04:01.598Z · LW(p) · GW(p)
Nevermind A and B. I'm waiting to see how Exhibit C fares. 11 - 3 at the moment. 12 -15 is what I would expect if each comment received equal exposure. Perhaps discounting the reply a little because interest in the thread may have waned slightly.
↑ comment by wedrifid · 2010-02-04T09:32:54.957Z · LW(p) · GW(p)
Exhibit A. I gave a counterargument which convinced the author of that comment to change his mind, yet the original comment is still at 14.
Exhibit A has my vote because it is a reasonably insightful one-liner, and a suitable response to the parent. Your reply to Exhibit A is a reductio ad absurdum that just does not follow.
I pointed out that accepting this premise would lead to indifference between wireheading and anti-wireheading.
Which is simply wrong. Please see this list of preferences which seem natural regarding positive and negative integers (and their wireheading counterparts). You haven't even expressed disagreement with any of those propositions, which I expected to be uncontroversial, yet your whole 'karma anomalies' objection seems to hinge on them. I find this extremely rude.
Exhibit B. James Andrix's comment is at 20, while Toby Ord's counterargument is at 3. This issue is still confusing to me so I can't say for sure that Toby is right and James is wrong, but I think Toby has the stronger argument, and in any case I see no way that 20 to 3 is justified on the merits.
This is an excellent example of the karma system serving its purpose. James' post was voted up above 20 because it was fascinating. Toby got 5 votes for pointing out the limit to when that kind of math is applicable. He did not get my vote because his final paragraph about the bible/koran is distinctly muddled thinking.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-04T09:56:00.012Z · LW(p) · GW(p)
I find this extremely rude.
Yes, I was deliberately ignoring you because you were assuming agreement on things that I had already made clear that I don't agree with. It seemed to me that the discussion wasn't being productive because you weren't paying attention. If there is a way to set a disagreement status I would have done that, but apparently the accepted/expected way to end arguments here is to just stop talking.
Also, given your stated taste for "direct social competition", I've decided to not argue with you anymore in the future, since it doesn't seem to further my ends. Feel free to continue to reply to my comments or posts. I think you often make good points, but debating with you is just not fun. (I reserve the right to agree with you though. :)
Replies from: wedrifid, wedrifid↑ comment by wedrifid · 2010-02-04T11:06:17.812Z · LW(p) · GW(p)
Yes, I was deliberately ignoring you because you were assuming agreement on things that I had already made clear that I don't agree with.
I perhaps should have made myself more clear. One of the points should have been "You assume for the purposes of attempting an argument to absurdity and I similarly assume for the purposes of following through just what the implications are". It is the intuitive preferences over the states of the universe that I assumed would be shared by most. I also believed that they served to illustrate the bulk of your point.
↑ comment by wedrifid · 2010-02-04T11:00:06.967Z · LW(p) · GW(p)
Also, given your stated taste for "direct social competition"
That was probably an effective move. It seems like honesty is rarely a good policy. Things can always be taken out of context and used against you. In the case of trolls who are being belligerent and silly I do not mind mocking them. I consider such status games completely distinct from discussion. In fact, they are much more like playing laser tag, another healthy place to turn off the brain and exercise competitive instincts. It is when such status games infect what is presented as 'intelligent discussion and debate' that I despise them, usually vocally. It frustrates me furiously when competition degrades conversation to logically rude debate.
I shall continue to agree with you when you make good points and to disagree when you don't; to meta-level discussions like this one I shall continue to strongly object.
comment by MrHen · 2010-02-01T22:09:56.119Z · LW(p) · GW(p)
Another content opinion question: What and where is considered appropriate to discuss personal progress/changes/introspection regarding Rationality? I assume that LessWrong is not to be used for my personal Rationality diary.
The reason I ask is that the various threads discussing my beliefs seem to pick up some interest and they are very helpful to me personally.
I suppose the underlying question is this: If you had to choose topics for me to write about, what would they be? My specific religious beliefs have been requested by a few people, so that is given. Is there anything else? If I were to talk about my specific beliefs, what is the best way to do so?
Replies from: ciphergoth, AdeleneDawner, Blueberry↑ comment by Paul Crowley (ciphergoth) · 2010-02-01T22:19:23.715Z · LW(p) · GW(p)
You should definitely start a blog. I for one look forward to reading and commenting.
↑ comment by AdeleneDawner · 2010-02-01T22:15:40.175Z · LW(p) · GW(p)
I only have a very general feel for where that line is, so I can't help with that, but I would personally be interested in following such a diary. Perhaps you could start a blog?
↑ comment by Blueberry · 2010-02-01T22:25:41.669Z · LW(p) · GW(p)
Given what you've said so far about your personal situation, I think it's appropriate to discuss your personal progress and introspection regarding rationality on this site. I think a lot of us would find it helpful and interesting to see how your thought processes and beliefs change as you reexamine them.
I'm especially curious about more details regarding your personal situation, your past history of religious beliefs, and "Event X".
comment by CronoDAS · 2010-02-14T04:05:55.854Z · LW(p) · GW(p)
XKCD hits a home run with its Valentine's Day comic.
Replies from: wedrifid↑ comment by wedrifid · 2010-02-14T05:16:27.439Z · LW(p) · GW(p)
Given the alt text in particular I'd almost put this in the monthly quotes thread too. :)
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2010-02-14T05:22:29.756Z · LW(p) · GW(p)
You have been preempted.
Replies from: wedrifidcomment by MrHen · 2010-02-09T19:49:24.893Z · LW(p) · GW(p)
What is the correct term for the following distinction:
Scenario A: The fair coin has 50% chance to land heads.
Scenario B: The unfair coin has an unknown chance to land heads, so I assign it a 50% chance to get heads until I get more information.
If A flips up heads it won't change the 50%. If B flips up heads it will change the 50%. This makes Scenario A more [something] than Scenario B, but I don't know the right term.
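To make the distinction concrete in code (the point-mass and uniform Beta(1, 1) priors here are just one possible way to model the two scenarios, not part of the original question):

```python
from fractions import Fraction

# Scenario A: the bias is *known* to be exactly 1/2. A head changes
# nothing; the probability of heads stays 1/2 forever.
p_heads_known = Fraction(1, 2)

# Scenario B: the bias is unknown. A uniform Beta(1, 1) prior over the
# bias also gives a 1/2 predictive probability -- but it updates.
alpha, beta = 1, 1
prior_predictive = Fraction(alpha, alpha + beta)

alpha += 1  # observe one head
posterior_predictive = Fraction(alpha, alpha + beta)

print(p_heads_known)         # 1/2, before and after any flip
print(prior_predictive)      # 1/2
print(posterior_predictive)  # 2/3: same starting 50%, different response
```

Both scenarios assign 50% before the flip; only B's 50% moves on new evidence.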
Replies from: Rain↑ comment by Rain · 2010-02-11T19:01:26.272Z · LW(p) · GW(p)
Static? Unchanging? Complete (as far as definitions of the situation go)? Simple (as far as equations go - it lacks the dynamic variable representing the need to update)?
Replies from: MrHen↑ comment by MrHen · 2010-02-11T19:27:51.470Z · LW(p) · GW(p)
Thank you for responding! I was wondering if anyone ever would.
The best I could come up with was "Fixed" or "Confident." Your choices seem on par with those. Perhaps there is no technical term for this? I find that hard to believe.
Changing the original question slightly seems to be looking for a different but similar term:
Unfair coin A has been flipped 10^6 times and appears to be converging on 60% in favor of HEADS
Unfair coin B has been flipped 10^1 times and appears to be converging on 60% in favor of HEADS
If I flip coin A and it results in HEADS the estimate of 60% will move less than it would if I were flipping coin B. This makes coin A more [something] than coin B, but I don't know the right term.
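A rough sketch of the difference in update size, modeling each coin's history with a Beta(1, 1) prior (my choice of model, purely for illustration):

```python
def posterior_mean(heads, flips):
    """Posterior mean of the coin's bias under a Beta(1, 1) prior."""
    return (1 + heads) / (2 + flips)

# Coin A: 600,000 heads in 10^6 flips; coin B: 6 heads in 10 flips.
# How much does one more head move each estimate?
shift_A = posterior_mean(600_001, 1_000_001) - posterior_mean(600_000, 1_000_000)
shift_B = posterior_mean(7, 11) - posterior_mean(6, 10)

print(shift_A)   # about 4e-7: one more head barely moves the estimate
print(shift_B)   # about 0.03: the same head moves it noticeably
```

The heavily-flipped coin's estimate barely budges; that difference in sensitivity is the [something] being asked about.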
Replies from: Rain, thomblake↑ comment by thomblake · 2010-02-11T19:36:10.879Z · LW(p) · GW(p)
This makes coin A more [something] than coin B
I'm pretty sure it makes your beliefs about coin A more [something] than coin B.
Replies from: MrHen↑ comment by MrHen · 2010-02-11T19:39:31.210Z · LW(p) · GW(p)
Okay, sure, I can deal with that. But I still need something to put in for [something]. :)
Replies from: thomblake↑ comment by thomblake · 2010-02-11T19:41:49.094Z · LW(p) · GW(p)
Left as an exercise for the reader.
Replies from: MrHencomment by byrnema · 2010-02-07T04:03:53.379Z · LW(p) · GW(p)
Daniel Varga wrote
In a universe where merging consciousnesses is just as routine as splitting them, the transhumans may have very different intuitions about what is ethical.
What I started wondering about when I began assimilating this idea of merging, copying and deleting identities, is what kind of legal/justice system could we depend upon if this was possible to enforce non-criminal behavior?
Right now we can threaten to punish people by restricting their freedom over a period of time that is significant with respect to the length of their lifetime. However, the whole equation might change if a would-be criminal thinks there's a p% chance they won't get caught, and a (100-p)% chance that one of their identities will have to go to jail...
Even a death penalty would be meaningless to someone who knows they could upload themselves to another vessel at any time. (If I had criminal intentions, I would upload myself just before the criminal act, so that the upload would be innocent.)
(I am posting this comment here because it is off-topic with respect to the thread, which was about whether we're in a simulation or not.)
Replies from: JGWeissman↑ comment by JGWeissman · 2010-02-07T04:34:52.813Z · LW(p) · GW(p)
In a world with an FAI Singleton, actions that would violate another individual's rights might be simply unavailable, making the concept of a legal/justice system obsolete.
In other scenarios, uploading/splitting would still take resources, which might be better used than in absorbing a criminal punishment. A legal/justice system could apply punishments to multiple instances of the criminal, and could be powerful enough to likely track them down.
If I had criminal intentions, I would upload myself just before the criminal act, so that the upload would be innocent
I am not convinced that the upload would be innocent. Maybe, if the upload was rolled back to before the criminal intentions. Any attempt by the upload to profit from the crime would definitely make it complicit.
Criminal punishment could also take the form of torture, effective if the would be criminal fears any of its instances being tortured, even if some are not.
comment by Paul Crowley (ciphergoth) · 2010-02-06T11:30:06.499Z · LW(p) · GW(p)
Measure your risk intelligence, a quiz in which you answer questions on a confidence scale from 0% to 100% and your calibration is displayed on a graph.
Obviously a linear probability scale is the Wrong Thing - if we were building it, we'd use a deciban scale and logarithmic scoring - but interesting all the same.
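A minimal sketch of what that scoring might look like (the deciban framing is as described; the function itself is just an illustration):

```python
import math

def log_score_decibans(p_assigned, outcome_true):
    """Logarithmic score in decibans: 10 * log10 of the probability
    assigned to what actually happened. 0 is perfect; more negative is worse."""
    p = p_assigned if outcome_true else 1 - p_assigned
    return 10 * math.log10(p)

# A confident right answer scores near zero...
print(round(log_score_decibans(0.99, True), 2))   # -0.04
# ...a confident wrong answer is punished severely...
print(round(log_score_decibans(0.99, False), 2))  # -20.0
# ...and hedging at 50% costs about 3 decibans either way.
print(round(log_score_decibans(0.5, True), 2))    # -3.01
```

Unlike a linear scale, this scoring makes overconfidence expensive in exactly the way calibration training wants.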
comment by pdf23ds · 2010-02-05T02:31:05.917Z · LW(p) · GW(p)
I may be stretching the openness of the thread a little here, but I have an interesting mechanical engineering hobbyist project, and I have no mechanical aptitude. I figure some people around here might, and this might be interesting to them.
The Avacore CoreControl is a neat little device, based on very simple mechanical principles, that lets you exercise for longer and harder than you otherwise could, by cooling down your blood directly. It pulls a slight vacuum on your hand, and directly applies ice to the palm. The vacuum counteracts the vasoconstriction effect of cold and makes the ice effective.
I'm mainly interested in building one because I play a lot of DDR, but anyone who gets annoyed with how quickly they get hot during exercise could use one.
I called the company, and they sell the device for $3000 (and they were very rude to me when I suggested making hobbyist plans available), but given the simplicity of the principles, it should be easy to build one using stuff from a hardware store for under $200. I have a post about it on my blog here.
comment by magfrump · 2010-02-03T18:53:26.607Z · LW(p) · GW(p)
We all know politics is the mind-killer, but it sometimes comes up anyway. Eliezer maintains that it is best to start with examples from other perspectives, but alas there is one example of current day politics which I do not know how to reframe: the health care debate.
As far as I can tell, almost every provision in the bill is popular, but the bill is not. This seems to be primarily because Republicans keep lying about it (I couldn't find a good link but there was a clip on the daily show of Obama saying "I can't find a reputable economist who agrees with what you're saying"(sic)).
When I see this, my mind stops. I think "people who disagree with me are lying scumbags or having the wool pulled over their eyes." Of course, this is probably not true.
Robin Hanson seems to think that it's good that the health care bill is not being passed, and I usually respect what he thinks a lot more than to accuse him of saying "my side wins!"
So I started to wonder, what am I missing?
The first explanation that came to my mind is not very good. I often think of libertarianism as starting from the idea of "don't patronize me." Phrased a little more maturely, it becomes "don't stop me from making deals I want to make." So assuming that most people want to force everyone to make a deal, how does this get resolved?
a) living in a democracy, the majority (of voters!) force their will on the minority -- the majority patronizes and the government patronizes.
b) politicians vie for their personal interests without regard to the majority -- the politicians patronize the people.
c) something I haven't thought of (left for the comments)
d) opposition should block bills any way they can, even by exploiting poorly designed institutions -- opposition should patronize the majority.
None of these seems reasonable or likely to me, but this is where my mind stops, and I don't want it to stop there.
EDIT: politics killed my mind halfway through the first draft.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-03T20:04:43.777Z · LW(p) · GW(p)
c)
comment by XiXiDu · 2010-02-02T18:25:36.921Z · LW(p) · GW(p)
Anyone willing to give some uneducated fool a little math coaching? I'm really just starting with math and I probably shouldn't already get into this stuff before reading up more, but it's really bothering me. I came across this page today: http://wiki.lesswrong.com/wiki/Prior_odds
My question, how do you get a likelihood ratio of 11:1 in favor of a diamond? I'm getting this: .88/(.88+7.92)=.1 thus 10% probability for a beep to be a box containing a diamond? Since the diamond-detector is 88% likely to beep on that 1 box and 8% likely to beep on the 99 boxes containing no diamonds. So you have 7.92 false beeps and .88 positive ones which add up to 8.8 beeps of which only .88 are actually boxes containing a diamond?
As of today I'm still struggling with basic algebra. So that might explain my confusion. Though at some point I'll arrive at probability. But I'd be really grateful if somebody could enlighten me now.
Thanks!
Replies from: MrHen, ciphergoth, Cyan↑ comment by MrHen · 2010-02-02T19:20:01.478Z · LW(p) · GW(p)
p(A|X) = p(X|A)*p(A) / ( p(X|A)*p(A) + p(X|~A)*p(~A) )
A = box has diamond
X = box beeped
p(A) = .01
p(X|A) = .88
p(X|~A) = .08
p(A|X) = .88 * .01 / ( .88 * .01 + .08 * .99)
p(A|X) = .0088 / (.0088 + .0792)
p(A|X) = .0088 / .088
p(A|X) = .1
This is different than the likelihood ratio:
LR = p(X|A) / p(X|~A)
LR = .88 / .08
LR = 11
The likelihood ratio can be worded as, "It is 11 times more likely to be a diamond when it beeps." The original formula answers the question, "What is the probability that this beep means a diamond?"
In other words, the likelihood ratio is starting with the contents of a box and asking whether that box is going to beep. p(A|X) is starting with a beep and trying to figure out what that beep means about the contents of the box.
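To check the arithmetic (a direct transcription of the numbers above):

```python
p_diamond = 0.01            # 1 box in 100 contains a diamond
p_beep_if_diamond = 0.88
p_beep_if_empty = 0.08

# Posterior via Bayes' theorem: p(A|X)
numerator = p_beep_if_diamond * p_diamond
evidence = numerator + p_beep_if_empty * (1 - p_diamond)
posterior = numerator / evidence

# Likelihood ratio: p(X|A) / p(X|~A)
likelihood_ratio = p_beep_if_diamond / p_beep_if_empty

print(round(posterior, 6))         # 0.1
print(round(likelihood_ratio, 6))  # 11.0
```

Same inputs, two different questions, two different numbers.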
Replies from: XiXiDu↑ comment by XiXiDu · 2010-02-02T20:22:43.259Z · LW(p) · GW(p)
Yes, thanks to Cyan's reply, I've already figured that "it is 11 times more likely to be a diamond when it beeps" than that the beep indicates a false positive. Your reply makes it even more obvious. My whole problem was my ignorance regarding the meaning of the likelihood ratio of testing a random box opposed to the overall probability of a beep. Or in other words, I was unaware that there were actually two different questions being estimated.
Thanks everybody!
Replies from: MrHen↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T18:36:58.912Z · LW(p) · GW(p)
If you haven't read Bayes Theorem yet, it's definitely the place to start.
Replies from: XiXiDu↑ comment by Cyan · 2010-02-02T18:34:50.548Z · LW(p) · GW(p)
The likelihood ratio is Pr(beep | diamond) / Pr(beep | empty) = 0.88/0.08 = 11. I was going to say you ought to read the link for "likelihood ratio", but there's nothing there, so you should try the other wiki.
Also, don't think of running the detector over every box; think of testing one box at random.
comment by JamesAndrix · 2010-02-02T15:45:50.402Z · LW(p) · GW(p)
Would there be interest in a more general discussion forum for rationalists, or does one already exist? I think it would be useful to test the discussion of politics, religion, entertainment, and other topics without ruining lesswrong. It could attract a wider audience and encourage current lurkers to post.
Replies from: Upset_Nerd↑ comment by Upset_Nerd · 2010-02-03T04:12:43.503Z · LW(p) · GW(p)
I'm one of the lurkers who would really like to see such a discussion forum. Since a forum's quality is almost solely decided by its members, a more general forum with the same user base as Less Wrong should easily be superior to most forums even on specialist topics. Maintaining the same high standards of discourse would probably be difficult though, since I assume that the focus on rationalist topics here discourages non-rationalists from participating, something which wouldn't be the case on a more general forum.
comment by nhamann · 2010-02-01T23:24:34.386Z · LW(p) · GW(p)
This is sort of off-topic for LW, but I recently came across a paper that discusses Reconfigurable Asynchronous Logic Automata, which appears to be a new model of computation inspired by physics. The paper claims that this model yields linear-time algorithms for both sorting and matrix multiplication, which seems fairly significant to me.
Unfortunately the paper is rather short, and I haven't been able to find much more information about it, but I did find this Google Tech Talks video in which Neil Gershenfeld discusses some motivations behind RALA.
Replies from: mkehrt, RobinZ↑ comment by mkehrt · 2010-02-02T01:26:30.917Z · LW(p) · GW(p)
A quick glance seems to indicate that they achieve these linear-time algorithms through massive parallelization. This is "cheating" because to do a linear-time sort of size n, you need O(n) processing units. While they seem to argue that this is acceptable because processing is becoming more and more parallel, it breaks down for large n. One can easily use traditional algorithms to sort a billion elements in O(n * log n); however, for their algorithm to sort such a list in O(n) time, they need a billion (times some constant factor) processing units.
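To illustrate the trade-off with a toy example (rank sort is my own illustration of an O(n)-time, O(n)-processor sort, not the algorithm from the paper): give each element its own "processor" that counts, in O(n) sequential steps, how many elements precede it. With n such units running in parallel, wall-clock time is O(n), but total work is O(n^2).

```python
def rank_sort(xs):
    """Rank sort: each loop iteration is one 'processor's' job. With n
    processors the iterations run concurrently, so wall-clock time is
    O(n) -- but total work across all processors is O(n^2)."""
    n = len(xs)
    out = [None] * n
    for i, x in enumerate(xs):
        # rank = number of strictly smaller elements, plus earlier duplicates
        rank = sum(1 for j, y in enumerate(xs) if y < x or (y == x and j < i))
        out[rank] = x
    return out

print(rank_sort([5, 3, 8, 3, 1]))   # [1, 3, 3, 5, 8]
```

The "linear time" headline hides the fact that the processor count, and the total work, grow with the input.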
I'm also vaguely perplexed by their basic argument. They want to have programming tools and computational models which are closer to the metal to take advantage of the features of new machines. This ignores the fact that the current abstractions exist, not just for historical reasons, but because they are easy to reason about.
This is all from a fairly cursory read of their paper, however, so take it with a grain of salt.
Replies from: pengvado↑ comment by pengvado · 2010-02-02T01:53:34.678Z · LW(p) · GW(p)
It takes O(n) memory units just to store a list of size n. Why should computers have asymptotically more memory units than processing units? You don't get to assume an infinitely parallel computer, but O(n)-parallel is only reasonable.
My first impression of the paper is: We can already do this, it's called an FPGA, and the reason we don't use them everywhere is that they're hard to program for.
↑ comment by RobinZ · 2010-02-02T02:17:07.562Z · LW(p) · GW(p)
Interesting. I would like to see a sanity check from someone knowledgeable in either electrical engineering or computer science; there are two things which concern me:
- Is it physically plausible?
- Is it usable?
Edit: I see the question is already answered.
comment by Hook · 2010-02-16T02:24:53.593Z · LW(p) · GW(p)
I've been reading Probability Theory by E.T. Jaynes and I find myself somewhat stuck on exercise 3.2. I've found ways to approach the problem that seem computationally intractable (at least by hand). It seems like there should be a better solution. Does anyone have a good solution to this exercise, or even better, know of collection of solutions to the exercises in the book?
At this point, if you have a complete solution, I'd certainly settle for vague hints and outlines if you didn't want to type the whole thing. Thanks.
Replies from: Morendil↑ comment by Morendil · 2010-02-16T08:21:33.618Z · LW(p) · GW(p)
Hint: you need to use the sum rule.
The computation is quite manageable for the case of k=5. For the general case, I too was left feeling dissatisfied with the expression I found, but on reflection I'm somewhat confident it is the correct answer.
The case k=4, Ni=13, m=5 is solved numerically on a Web site which discusses probability for Poker players, that was helpful in checking my results; the answer to 3.2 is a generalization of the results given there.
There does not appear to be a complete collection of solutions. This site comes closest. If I were you I would avoid looking at their solution for exercise 4.1 (I'm trying to forget what little I've seen of it as I'd like to solve 4.1 under my own power), but I would also not feel bad about giving up on 4.1 if you find it difficult.
I'd be happy to discuss Jaynes further over DMs or email - though I may respond at a slow pace, as I'm working through the book as my other activities allow. I'm on chapter 6 now.
Replies from: Hookcomment by [deleted] · 2010-02-12T03:15:32.537Z · LW(p) · GW(p)
Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.
May I?
Replies from: orthonormal, Kevin, ciphergoth, whpearson, JGWeissman, thomblake↑ comment by orthonormal · 2010-02-15T00:51:31.603Z · LW(p) · GW(p)
I second Kevin: the nearest analogy that occurs to me is playing "kick the landmine" when the landmine is almost surely a dud.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-02-15T01:39:32.947Z · LW(p) · GW(p)
Of course, the advantage of "kick the landmine" is that you don't take the rest of the world out in case it wasn't a dud.
↑ comment by Kevin · 2010-02-12T04:37:56.703Z · LW(p) · GW(p)
I think Eliezer would say no (see http://lesswrong.com/lw/10g/lets_reimplement_eurisko/) but I think you're so astronomically unlikely to succeed that it doesn't matter.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-12T10:31:04.134Z · LW(p) · GW(p)
What on Earth? When you say "may I" you presumably mean "is this a good idea" since obviously we're not in a position to stop you. But you're already aware of the arguments why it isn't a good idea and you don't address them here, so it's not clear that you have a good purpose for this comment in mind.
Replies from: byrnema↑ comment by byrnema · 2010-02-12T13:22:55.800Z · LW(p) · GW(p)
I interpreted it as akin to a call to a suicide hot-line.
'This is sounding like a good idea...'
(Can you help / talk me out of it?)
If this is the case, we can probably give support. I certainly understand how curiosity can pull, and Warrigal may already be rationalizing that he probably won't make progress, and we can give advice that balances that. But then, is it true that Warrigal should be afraid of knowledge?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-12T14:05:48.379Z · LW(p) · GW(p)
I don't think it's fear of knowledge that leads me to suggest you don't try to build a catapult to twang yourself into a tree.
↑ comment by whpearson · 2010-02-12T10:55:49.071Z · LW(p) · GW(p)
Do you mean playing around with backprop? Or making your own algorithms.
Replies from: None↑ comment by [deleted] · 2010-02-13T00:49:31.304Z · LW(p) · GW(p)
Either.
Replies from: Eliezer_Yudkowsky, whpearson↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-15T04:10:17.062Z · LW(p) · GW(p)
If this is your state of knowledge then... how can I put this: it seems extremely likely that you'll start playing around with very simple tools, find out just how little they can do, and, if you're lucky, start reading up and rediscovering the world of AI.
↑ comment by JGWeissman · 2010-02-12T03:30:12.043Z · LW(p) · GW(p)
No.
What made you think you might get any other answer?
Replies from: Nonecomment by Douglas_Knight · 2010-02-11T19:49:52.781Z · LW(p) · GW(p)
I wonder if physicists would admit the effect of genealogy on their interpretation of QM?
People who ask physicists their interpretation of QM: next time, if the physicist admits controversy, ask about genealogy and other forms of epistemic luck.
Replies from: wnoise↑ comment by wnoise · 2010-02-12T05:37:13.659Z · LW(p) · GW(p)
I'm a grad student of quantum information. My advisor doesn't really talk much about interpretations, going only so far as to point out how silly the Bohmians are. That's largely true of most in this group, though one is an avowed "quantum Bayesian": probability as conceptualized by humans is simply the specialization to commuting variables, but we need non-commuting variables to deal with the world. The laws of quantum mechanics tell you how to update your information under time evolution.
My interpretation of QM was formed as an undergrad, with no direct professorial contact. It was based mostly on how arbitrary the placing of the classical-quantum divide in treatments is, so long as you place it so enough stuff is quantum. I took that seriously, bit the bullet, and so am an Everettian.
comment by MrHen · 2010-02-10T17:37:32.911Z · LW(p) · GW(p)
While reading old posts and looking for links to topics in upcoming drafts I have noticed that the Tags are severely underutilized. Is there a way to request a tag for a particular post?
Example: Counterfactual has one post and it isn't one of the heavy hitters on the subject.
Replies from: arundelo↑ comment by arundelo · 2010-02-10T23:36:23.905Z · LW(p) · GW(p)
If you have specific articles in mind to be tagged, I'm sure just asking their authors would be fine. If you click on someone's name to go to their user page, you'll see a "Send message" button (though I have never actually used this feature).
comment by underling · 2010-02-09T13:08:37.020Z · LW(p) · GW(p)
Hi LessWrongers,
I'm aware that Newcomb's problem has been discussed a lot around here. Nonetheless, I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view among experts. Can someone point to the relevant knockdown argument? (I found Newcomb's Problem and Regret of Rationality, but the only argument therein seems to be that 1-boxers get what they want, and that's what makes 1-boxing rational. Now, getting what one wants seems to be neither necessary nor sufficient, because you should get it because of your rational choice, not because the predictor rigged the situation?!)
Many thanks for any links, corrections and help!
Replies from: Alicorn, wedrifid, byrnema, Kevin↑ comment by Alicorn · 2010-02-09T13:10:25.438Z · LW(p) · GW(p)
The predictor "rigged" the situation, it's true, but you have that information, and should take it into account when you decide which choice is rational.
Replies from: Furcas, underling↑ comment by Furcas · 2010-02-10T17:50:37.124Z · LW(p) · GW(p)
We also have the information that our decision won't affect what's in the boxes, and we should also take that into account.
The only thing that our decision determines is whether we'll get X or X+1000 dollars. It does not determine the value of X.
If X were determined by, say, flipping a coin, should a rational agent one-box or two-box? Two-box, obviously, because there's not a damn thing he can do to affect the value of X.
So why choose differently when X is determined by the kind of brain the agent has? When the time to make a decision comes, there still isn't a damn thing he can do to affect the value of X!
The only difference between the two scenarios above is that in the second one the thing that determines the value of X also happens to be the thing that determines the decision the agent will make. This creates the illusion that the decision determines X, but it doesn't.
Two-boxing is always the best decision. Why wouldn't it be? The agent will get 1000 dollars more than he would have gotten otherwise. Of course, it would be even better to pre-commit to one-boxing, since this will indeed affect the kind of brain we have, which will in turn affect the value of X, but that decision is outside the scope of Newcomb's problem.
Still, if the agent had pre-committed to one-boxing, shouldn't he two-box once he's on the spot? That's a wrong question. If he really pre-committed to one-boxing, he won't be able to choose differently. No, that's not quite right. If the agent really pre-committed to one-boxing, he won't even have to make the decision to stick to his previous decision. With or without pre-commitment, there is only one decision to be made, though at different times. If you have a Newcombian decision to make, you should always two-box, but if you pre-committed you won't have a Newcombian decision to make in Newcomb's problem; actually, for that reason, it won't really be Newcomb's problem... or a problem of any kind, for that matter.
↑ comment by underling · 2010-02-10T14:57:40.125Z · LW(p) · GW(p)
Right, but exactly this information seems to the 2-boxer to point to 2-boxing! If the game is rigged against you, so what? Take both boxes. You cannot lose, and there's a small chance the conman erred.
Mhm. I'm still far from convinced. Is this my fault? Am I at all right in assuming that 1-boxing is heavily favored in this community? And that this is a minority belief among experts?
Replies from: Alicorn, byrnema↑ comment by Alicorn · 2010-02-10T14:59:07.346Z · LW(p) · GW(p)
Perhaps it will make sense if you view the argument as more of a reason to be the kind of person who one-boxes, rather than an argument to one-box per se.
Replies from: underling↑ comment by underling · 2010-02-10T16:20:22.839Z · LW(p) · GW(p)
That's too cryptic for me. Where's the connection to your first comment?
As i said in reply to byrnema, I don't dispute that wanting to be the kind of person who 1-boxes in iterated games or in advance is rational, but one-shot? I don't see it. What's the rationale behind it?
Replies from: Alicorn, MrHen↑ comment by Alicorn · 2010-02-10T16:33:53.417Z · LW(p) · GW(p)
You have the information that in Newcomblike problems, it is better to (already) be inclined to predictably one-box, because the game is "rigged". So, if you (now) become predictably and generally inclined to one-box, you can win at Newcomblike problems if you encounter them in the future. Even if you only ever run into one.
Of course, Omega is imaginary, so it's entirely a thought experiment, but it's interesting anyway!
Replies from: underling↑ comment by underling · 2010-02-10T16:42:40.535Z · LW(p) · GW(p)
Agree completely.
But the crucial difference is: in the one-shot case, the box is already filled or not.
Replies from: Alicorn↑ comment by Alicorn · 2010-02-10T16:44:46.846Z · LW(p) · GW(p)
Yes. But it was filled, or not, based on a prediction about what you would do. We are not such tricksy creatures that we can unpredictably change our minds at the last minute and two-box without Omega anticipating this, so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.
Replies from: brazil84, underling↑ comment by brazil84 · 2010-02-13T02:07:01.852Z · LW(p) · GW(p)
I agree. I would add that situations can and do arise in real life where the other fellow can predict your behavior better than you can predict it yourself.
For example, suppose that your wife announces she is going on a health kick. She is joining a gym; she will go 4 or 5 times a week; she will eat healthy; and she plans to get back into the shape she was in 10 years ago. You might ask her what she thinks her probability of success is, and she might honestly tell you she thinks there is a 60 or 70% chance her health kick will succeed.
On the other hand, you, her husband know her pretty well and know that she has a hard time sticking to diets and such. You estimate her probability of success at no more than 10%.
Whose probability estimate is better? I would guess it's the husband's.
Well, in the Newcomb experiment, the AI is like the husband who knows you better than you know yourself. Trying to outguess and/or surprise such an entity is a huge uphill battle. So, even if you don't believe in backwards-causality, you should probably choose as if backwards causality exists.
JMHO
Replies from: Alicorn↑ comment by Alicorn · 2010-02-13T02:08:32.001Z · LW(p) · GW(p)
you, her husband
I do not anticipate ever becoming someone's husband.
Replies from: brazil84, Clippy↑ comment by Clippy · 2010-02-13T03:23:10.354Z · LW(p) · GW(p)
Neither do I. That would be stupid. Why would anyone ever want to become anyone's husband?
Replies from: Kevin↑ comment by Kevin · 2010-02-13T05:33:45.867Z · LW(p) · GW(p)
Maybe your wife-to-be is a wealthy heiress?
Replies from: Unknowns↑ comment by Unknowns · 2010-02-13T06:57:09.169Z · LW(p) · GW(p)
I think Clippy's point was that becoming a husband doesn't generate paperclips.
Replies from: Kevin↑ comment by Kevin · 2010-02-13T10:37:30.808Z · LW(p) · GW(p)
Oh, is Clippy a Less Wrong version of a troll account? That's kind of cute.
Replies from: Blueberry, Clippy↑ comment by Clippy · 2010-02-14T00:44:49.693Z · LW(p) · GW(p)
You ask a dumb, naive question, and I'm the troll? I'm cute?
Tip: To send an email in Outlook, press ctrl+enter.
Replies from: Jack↑ comment by Jack · 2010-02-14T01:53:43.572Z · LW(p) · GW(p)
So do your values both include maximizing paper clips and helping people use Microsoft Office products? How exactly do you decide which to spend your time on? How do you deal with trade offs?
Replies from: Clippy, Alicorn↑ comment by Clippy · 2010-02-16T18:21:44.121Z · LW(p) · GW(p)
There is no conflict between helping people with Office and making paperclips. Why would you think there is? Better Office users means better tools for making paperclips, and more paperclips gives people more reasons to use Office.
Did you find this answer helpful?
Tip: Press F1 for help.
↑ comment by Alicorn · 2010-02-14T01:54:42.516Z · LW(p) · GW(p)
And: If presented with the chance to turn all copies of the hardware on which Microsoft Office products are stored and run into paperclips instead, would you do it?
Replies from: Jordan↑ comment by Jordan · 2010-02-14T02:06:55.221Z · LW(p) · GW(p)
Perhaps the 'paper clips' Clippy is trying to maximize are the anthropomorphic paper clips embodied in Microsoft Office. This would explain Clippy's helpful hints: to convince us all of the usefulness of Microsoft Office, thus encouraging us to run that program.
If this is the case, we face a fate worse than paper clip tiling.... Microsoft software tiling.
↑ comment by underling · 2010-02-11T08:58:04.204Z · LW(p) · GW(p)
so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.
If we rule out backwards causation, then why on earth should this be true???
Replies from: Jordan, Kevin↑ comment by Jordan · 2010-02-11T09:43:38.855Z · LW(p) · GW(p)
Imagine a simple but related scenario that involves no backwards causation:
You're a 12-year-old kid, and you know your mom doesn't want you to play with your new Splogomax unless an adult is with you. Your mom leaves you alone for an hour to run to the store, telling you she'll punish you if you play with the Splogomax. Whether or not there's any evidence of it when she returns, she knows you well enough to know whether you're going to play with it, although she'll refrain from passing judgement until just after she gets back from the store.
Assuming you fear punishment more than you enjoy playing with your Splogomax, do you decide to play or not?
Edit: now I feel stupid. There's a much simpler way to get my point across. Just imagine Omega doesn't fill any box until after you've picked up one or two boxes and walked away, but that he doesn't look at your choice when filling the boxes.
Replies from: underling↑ comment by underling · 2010-02-11T12:43:17.708Z · LW(p) · GW(p)
So what is your point? That no backwards causation is involved is assumed in both cases. If this scenario is for dialectical purposes, it fails: it is equally clear, if not clearer, that my actual choice has no effect on the contents of the boxes.
For what it's worth, let me reply with my own story:
Omega puts the two boxes in front of you, and says the usual. Just as you’re about to pick, I come along, grab both boxes, and run. I do this every time Omega confronts someone with his boxes, and I always do as well as a two-boxer and better than a one-boxer. You have the same choice as me: Just two-box. Why won’t you?
Replies from: Cyan↑ comment by Cyan · 2010-02-11T14:20:20.428Z · LW(p) · GW(p)
You have the same choice as me...
If Omega fills the boxes according to its prediction of the choice of the person being offered the boxes, and not the person who ends up with the boxes, then the above statement is where your argument breaks down.
Replies from: underling↑ comment by underling · 2010-02-11T14:36:46.208Z · LW(p) · GW(p)
You have the same choice as me: Take one box or both. (Or, if you assume there are no choices in this possible world because of determinism: It would be rational to 2-box, because I, the thief, do 2-box, and my strategy is dominant)
Replies from: Cyan↑ comment by Cyan · 2010-02-11T14:45:26.918Z · LW(p) · GW(p)
It's better for the thief to two-box because it isn't the thief's decision algorithm that determined the contents of the boxes.
Replies from: underling↑ comment by underling · 2010-02-11T15:09:07.471Z · LW(p) · GW(p)
Is it not rather Omega's undisclosed method that determines the contents? That seems to make all the difference.
Replies from: Cyan↑ comment by Cyan · 2010-02-11T15:17:09.379Z · LW(p) · GW(p)
No. The method's output depends on its input, which by hypothesis is a specification of the situation that includes all the information necessary to determine the output of the individual's decision algorithm. Hence the decision algorithm is a causal antecedent of the contents of the boxes.
Replies from: underling↑ comment by underling · 2010-02-11T15:41:31.790Z · LW(p) · GW(p)
I mean, the actual token, the action, the choice, the act of my choosing does not determine the contents. It's Omega's belief (however obtained) that this algorithm is such-and-such that led it to fill the boxes accordingly.
Replies from: AndyWood, Cyan↑ comment by AndyWood · 2010-02-11T16:40:39.333Z · LW(p) · GW(p)
That is right - the choice does not determine the contents. But the choice is not as independent as common intuition suggests. Omega's belief and your choice share common causes. Human decisions are caused - they don't spontaneously spring from nowhere, causally unconnected to the rest of the universe - even if that's how it sometimes feels from the inside.

The situational state, and the state of your brain going into the situation, determine the decision that your brain will ultimately produce. Omega is presumed to know enough about these prior states, and how you function, to know what you will decide. Omega may well know better than you do what decision you will reach! It's important to realize that this is not that far-fetched. Heck, that very thing sometimes happens between people who know each other very well, without the benefit of one of them being Omega!

Your objection supposes that somehow, everything in the world, including your brain, could be configured so as to lead to a one-box decision; but then at the last moment, you could somehow pull a head-fake and just spontaneously spawn a transcendent decision-process that decides to two-box. It might feel to you intuitively that humans can do this, but as far as we know they do not in fact possess that degree of freedom.

To summarize, Omega's prediction and your decision have common, ancestor causes. Human decision-making feels transcendent from the inside, but is not literally so. Resist thinking of first-person choosing as some kind of prime mover.
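The common-cause structure described above can be put in a few lines of code. This is a toy sketch (all names are invented for illustration): Omega's prediction and the agent's choice are both computed from the same prior state, and they agree without any backwards arrow.

```python
# Toy sketch of the common-cause structure (illustrative names only):
# Omega's prediction and your choice both descend from the same prior
# brain state. No backwards causation is needed for them to match.

def choice_from_brain_state(brain_state):
    # Deterministic: the same prior state always yields the same choice.
    return "one-box" if brain_state["convinced_one_boxing_wins"] else "two-box"

def omegas_prediction(brain_state):
    # Omega reads the same prior state that the choice descends from.
    return choice_from_brain_state(brain_state)

for convinced in (True, False):
    state = {"convinced_one_boxing_wins": convinced}
    # The prediction always agrees with the eventual choice, because
    # both are effects of the common cause (the prior state).
    assert omegas_prediction(state) == choice_from_brain_state(state)
```

The point of the sketch is only that prediction and choice can be perfectly correlated while each is caused solely by earlier states.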
↑ comment by Cyan · 2010-02-11T15:45:24.339Z · LW(p) · GW(p)
Yes, that's true. Now chase "however obtained" up a level -- after all, you have all the information necessary to do so.
Replies from: underling↑ comment by underling · 2010-02-11T15:57:49.494Z · LW(p) · GW(p)
What do you mean? It could have created and run a copy, for instance, but anyhow, there would be no causal link. That's probably the whole point of the 2-Boxer-majority.
I can see a rationale behind one-boxing, and it might even be a standoff, but why almost no one here seems to see the point of 2-boxing is beyond me, as is the amazing overconfidence.
Replies from: Cyan↑ comment by Cyan · 2010-02-11T16:07:20.315Z · LW(p) · GW(p)
What do you mean?
I mean that as part of the specification of the problem, Omega has all the information necessary to determine what you will choose before you know yourself. There are causal arrows that descend from the situation specified by that information to (i) your choice, and (ii) the contents of the box.
why almost no one here seems to see the point of 2-boxing, and the amazing overconfidence is beyond me.
You stated that "the game is rigged". The reasoning behind 2-boxing ignores that fact. In common parlance, a rigged game is unwinnable, but this game is knowably winnable. So go ahead and win without worrying about whether the choice has the label "rational" attached!
Replies from: underling↑ comment by underling · 2010-02-11T16:24:25.709Z · LW(p) · GW(p)
Sadly, we seem to make no progress in any direction. Thanks for trying.
Replies from: Cyan↑ comment by Cyan · 2010-02-11T16:25:51.082Z · LW(p) · GW(p)
Likewise.
Replies from: MrHen↑ comment by MrHen · 2010-02-11T16:42:54.720Z · LW(p) · GW(p)
Yeah, I gotta give you both props for sticking it out that long. The annoying part for me is that I see both sides just fine and can see where the conceptual miss keeps happening.
Alas, that doesn't mean I can clarify anything better than you did.
↑ comment by MrHen · 2010-02-10T16:35:43.751Z · LW(p) · GW(p)
The one-shot game still has all of the information for the money in the boxes. If you walked in and picked both boxes you wouldn't be surprised by the result. If you walked in and picked one box you wouldn't be surprised by the result. Picking one box nets more money, so pick one box.
Replies from: underling↑ comment by underling · 2010-02-10T16:44:20.481Z · LW(p) · GW(p)
I deny that 1-boxing nets more money - ceteris paribus.
Replies from: thomblake, Alicorn↑ comment by thomblake · 2010-02-10T16:49:15.837Z · LW(p) · GW(p)
I deny that 1-boxing nets more money - ceteris paribus.
Then you're simply disagreeing with the problem statement. If you 1-box, you get $1M. If you 2-box, you get $1k. If you 2-box because you're considering the impossible possible worlds where you get $1.001M or $0, you still get $1k.
At this point, I no longer think you're adding anything new to the discussion.
Replies from: underling↑ comment by underling · 2010-02-11T08:37:48.895Z · LW(p) · GW(p)
I never said I could add anything new to the discussion. The problem is: judging by the comments so far, nobody here can, either. And since most experts outside this community agree on 2-boxing (or am I wrong about this?), my original question stands.
↑ comment by byrnema · 2010-02-10T16:24:09.168Z · LW(p) · GW(p)
If the game is rigged against you, so what? Take both boxes. You cannot lose, and there's a small chance the conman erred.
What helps me when I get stuck in this loop (the loop isn't incorrect exactly, it's just non-productive) is to meditate on how the problem assumes that, for all my complexity, I'm still a deterministic machine. Omega can read my source code and know what I'm going to pick. If I end up picking both boxes, he knew that before I did, and I'll end up with less money. If I can convince myself -- somehow -- to pick just the one box, then Omega will have seen that coming too and will reward me with the bonus. So the question becomes, can your source code output the decision to one-box?
The answer in humans is 'yes' -- any human can learn to output 1-box -- but it depends sensitively upon how much time the human has to think about it, to what extent they've been exposed to the problem before, and what arguments they've heard. Given all these parameters, Omega can deduce what they will decide.
Am I at all right in assuming that 1-boxing is heavily favored in this community?
These factors have come together (time + exposure to the right arguments, etc.) on Less Wrong so that people who hang out at Less Wrong have been conditioned to 1-box. (And are thus conditioned to win in this dilemma.)
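The "source code" framing in this comment can be made concrete with a toy simulation. This is a hypothetical sketch (`omega_fills_boxes`, `payoff`, and the dollar amounts are illustrative): Omega's prediction is modeled as simply running your decision procedure ahead of time.

```python
# Toy Newcomb simulation (illustrative names throughout): Omega "reads
# your source code" by running your decision procedure before you do.

def omega_fills_boxes(decision_function):
    # Omega's prediction is an early execution of your algorithm.
    predicted = decision_function()
    box_a = 1_000_000 if predicted == "one-box" else 0  # opaque box
    box_b = 1_000                                       # transparent box
    return box_a, box_b

def payoff(decision_function):
    box_a, box_b = omega_fills_boxes(decision_function)  # boxes filled first
    choice = decision_function()                         # your choice, later
    return box_a if choice == "one-box" else box_a + box_b

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Under these assumptions the only way to win is to be the kind of algorithm that one-boxes; there is no branch in which a program outputs "two-box" and still finds the opaque box full.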
Replies from: underling↑ comment by underling · 2010-02-10T16:40:18.971Z · LW(p) · GW(p)
I agree with everything you say in this comment, and still find 2-boxing rational. The reason still seems to be: you can consistently win without being rational.
Replies from: byrnema↑ comment by byrnema · 2010-02-10T16:55:43.411Z · LW(p) · GW(p)
By rational, I think you mean logical. (We tend to define 'rational' as 'winning' around here.*)
... and -- given a certain set of assumptions -- it is absolutely logical that (a) Omega has already made his prediction, (b) the stuff is already in the boxes, (c) you can only maximize your payoff by choosing both boxes. (This is what I meant by this line of reasoning isn't incorrect, it's just unproductive in finding the solution to this dilemma.)
But consider what other logical assumptions have already snuck into the logic above. We're not familiar with outcomes that depend upon our decision algorithm; we're not used to optimizing over that choice. The productive direction to think along is this one: unlike a typical situation, the contents of the boxes depend upon the algorithm that outputs your choice, and only indirectly on the choice itself.
You're halfway to the solution of this problem if you can see both ways of thinking about the problem as reasonable. You'll feel some frustration that you can alternate between them -- like flip-flopping between different interpretations of an optical illusion -- and they're contradictory. Then the second half of the solution is to notice that you can choose which way to think about the problem as a willful choice -- make the choice that results in the win. That is the rational (and logical) thing to do.
Let me know if you don't agree with the part where you're supposed to see both ways of thinking about the problem as reasonable.
* But the distinction doesn't really matter because we haven't found any cases where rational and logical aren't the same thing.
Replies from: underling↑ comment by underling · 2010-02-11T08:54:41.241Z · LW(p) · GW(p)
May I suggest again that defining rational as winning may be the problem?
Replies from: byrnema, byrnema↑ comment by byrnema · 2010-02-13T00:17:50.604Z · LW(p) · GW(p)
(2nd reply)
I'm beginning to come around to your point of view. Omega rewards you for being illogical.
.... It's just logical to allow him to do so.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-13T00:22:33.892Z · LW(p) · GW(p)
This is why I find it incomprehensible that anyone can really be mystified by the one-boxer's position. I want to say "Look, I've got a million dollars! You've got a thousand dollars! And you have to admit that you could have seen this coming all along. Now tell me who had the right decision procedure?"
↑ comment by byrnema · 2010-02-11T12:54:16.610Z · LW(p) · GW(p)
My point of view is that the winning thing to do here and the logical thing to do are the same.
If you want to understand my point of view or if you want me to understand your point of view, you need to tell me where you think logical and winning diverge. Then I tell you why I think they don't, etc.
You've mentioned 'backwards causality' which isn't assumed in our one-box solution to Newcomb. How comfortable are you with the assumption of determinism? (If you're not, how do you reconcile that Omega is a perfect predictor?)
Replies from: underling↑ comment by underling · 2010-02-11T14:09:01.813Z · LW(p) · GW(p)
You've mentioned 'backwards causality' which isn't assumed in our one-box solution to Newcomb.
Only to rule it out as a solution. No problem here.
How comfortable are you with the assumption of determinism?
In general, very. Concerning Newcomb, I don't think it's essential, and as far as I recall, it isn't mentioned in the original problem.
you need to tell me where you think logical and winning diverge
I'll try again: I think you can show with simple counterexamples that winning is neither necessary nor sufficient for being logical (your term for my rational, if I understand you correctly).
Here we go: it's not necessary, because you can be unlucky. Your strategy might be best, but you might lose as soon as luck is involved. It's not sufficient, because you can be lucky. You can win a game even if you're not perfectly rational.
1-boxing seems a variant of the second case, instead of (bad) luck the game is rigged.
Replies from: Cyan↑ comment by wedrifid · 2010-02-11T04:58:43.801Z · LW(p) · GW(p)
I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view elsewhere. Can someone point to the relevant knockdown argument?
- If you One Box you get $1,000,000
- If you Two Box you get $10,000
Therefore, One Box.
The rest is just details. If it so happens that those 'details' tell you to only get the $10,000 then you have the details wrong.
↑ comment by byrnema · 2010-02-09T13:21:40.917Z · LW(p) · GW(p)
I don't know what the consensus knock-down argument is, but this is how mine goes:
Usually, we optimize over our action choices to select the best outcome. (We can pick the blue box or the red box, and we pick the red box because it has the diamond.) Omega contrives a situation in which we must optimize over our decision algorithm for the best outcome. Choose over your decision algorithms (the decision algorithm to one-box, or the decision algorithm to two-box), just as you would choose among actions. You realize this is possible when you realize that choosing a decision algorithm is also an action.
(Later edit: I anticipated what might be most confusing about calling the decision algorithm an 'action' and have decided to add that the decision algorithm is an action that is not completed until you actually one box or two box. Your decision algorithm choice is 'unstable' until you have actually made your box choice. You "choose" the decision algorithm that one-boxes by one-boxing.)
Replies from: underling↑ comment by underling · 2010-02-10T14:47:38.503Z · LW(p) · GW(p)
If the solution were just to see that optimizing our decision algorithm is the right thing to do, the crucial difference between the original problem and the variant, where Omega tells you he will play this game with you some time in the future, seems to disappear. Hardly anyone denies 1-boxing is the rational choice in the latter case. There must be more to this.
Replies from: byrnema↑ comment by byrnema · 2010-02-10T16:08:23.990Z · LW(p) · GW(p)
I don't see a contradiction, just based on what you've written. (If a crucial difference disappears, then maybe it wasn't that crucial? Especially if the answer is the same, it's OK if the problems turn out to actually be more similar than you thought.) Could you clarify how you conclude that there there must be more to the problem?
Replies from: underling↑ comment by underling · 2010-02-10T16:32:49.505Z · LW(p) · GW(p)
My thinking goes like this: The difference is that you can make a difference. In the advance or iterated case, you can causally influence your future behaviour, and so the prediction, too. In the original case, you cannot (where backwards causation is forbidden on pain of triviality). Of course that's the oldest reply. But it must be countered, and I don't see it.
Replies from: thomblake↑ comment by thomblake · 2010-02-10T16:46:27.748Z · LW(p) · GW(p)
My thinking goes like this: The difference is that you can make a difference. In the advance or iterated case, you can causally influence your future behaviour, and so the prediction, too. In the original case, you cannot (where backwards causation is forbidden on pain of triviality). Of course that's the oldest reply. But it must be countered, and I don't see it.
Why can't you influence your future behavior in the original case? When you're trying to optimize your decision algorithm ('be rational'), you can consider Newcomblike cases even if Omega didn't actually talk to you yet. And so before you're actually given the choice, you decide that if you ever are in this sort of situation, you should one-box.
I'm sympathetic to some two-boxing arguments, but once you grant that one-boxing is the rational choice when you knew about the game in advance, you've given up the game (since you do actually know about the game in advance).
Replies from: byrnema↑ comment by byrnema · 2010-02-10T18:17:13.675Z · LW(p) · GW(p)
Alas, this comment really muddies the waters. It leads to Furcas writing something like this:
Of course, it would be even better to pre-commit to one-boxing, since this will indeed affect the kind of brain we have.
Underling asks: if the content of the boxes has already been decided, how can you retroactively affect the content of the boxes?
The problem with what you've written, thomblake, is that you seem to agree with Underling that he can't retroactively change the content of the boxes, and thus suggest that the content of the boxes has already been determined by past events, such as whether he has been exposed to these problems before and has pre-committed. (This is only vacuously true to the extent that everything is determined by past events.)
Suppose that Underling has never thought of the Newcomb problem before. The content of the boxes still depends upon what he decides, and his decision is a 'choice' just as much as any choice a person ever makes: he can decide which box to pick. And his decision algorithm, which he chooses, will decide the contents of the box.
Explaining why this isn't a problem with causality requires pointing to the determinism of the system. While Underling has a choice of decision algorithms, his choice has already been determined and affects the contents of the box.
If the universe is not deterministic, this problem violates causality.
↑ comment by Kevin · 2010-02-09T13:15:06.882Z · LW(p) · GW(p)
I think it comes down to the belief that one can choose to live life based on a personal utility function.
If you choose a utility function that one boxes in advance of Omega coming, you win the million dollars. Why pick a utility function that loses at Newcomb's Problem?
comment by Nic_Smith · 2010-02-06T06:26:08.242Z · LW(p) · GW(p)
I just read Outliers and I'm curious -- is there anything that would have taken 10000 hours in the EEA that would support Gladwell's "rule"? Is there anything else in neurology/our understanding of the brain that would make the idea that this is the amount of practice that's needed to succeed in something make sense?
Replies from: Kevin↑ comment by Kevin · 2010-02-06T07:28:57.165Z · LW(p) · GW(p)
Something to understand about Malcolm Gladwell is that he is an exceptionally talented writer who can turn a pseudo-theory into hundreds of pages of pleasant, entertaining non-fiction writing. He's not an evolutionary psychologist, though I bet he could write a really interesting and thought-provoking non-fiction piece on evolutionary psychology.
http://en.wikipedia.org/wiki/The_Tipping_Point#The_three_rules_of_epidemics
His pseudo-theory from The Tipping Point has not made advertisers any more money. It's an example of something that really does sound kind of true when you read it, but what he says doesn't explain much in the way of meaningful phenomena. Advertising companies tried to take advantage of his pseudo-theory of social influence, and they still make some efforts to target influential users, but it's a token effort compared to marketing as broadly as possible. Superbowl advertisements still work.
Replies from: Nic_Smith↑ comment by Nic_Smith · 2010-02-06T19:37:18.105Z · LW(p) · GW(p)
Oh, by no means did I want to suggest that Gladwell has a forte in evolutionary psychology; if he does, there's nothing to indicate it in what I've read. It's clear that he glosses over many of the details in his work, perhaps dangerously so. And the entire point of Outliers is that social environment is important to success; not exactly an earth-shattering insight, there's a negative Times review that's spot on.
That said, Gladwell says he originally got the idea for 10000 hours from Ericsson and Levitin. At worst, at this point, I think it's somewhat plausible. I still have a lot more searching to do on the subject, but I am interested in what evolutionary psychology might say about the idea -- alas, I'm also not an evolutionary psychologist, so I don't know that either.
Edit: Of course, what I'm really interested in is "Is the idea that it takes 10000 hours to master a skill set true in enough circumstances to make it a useful guideline?" I'm not interested in the viewpoint of evolutionary psychologists on skill acquisition per se.
Replies from: wedrifid↑ comment by wedrifid · 2010-03-04T01:44:32.921Z · LW(p) · GW(p)
Edit: Of course, what I'm really interested in is "Is the idea that it takes 10000 hours to master a skill set true in enough circumstances to make it a useful guideline?"
The '10000 hours' approximation seems surprisingly well founded, based on the research that Ericsson et al. reviewed in their works. Obviously this is to obtain 'expert' level performance, and you can still get 'good enough' levels from far less time. Also note that they specify that many of the hours must be deliberate practice and not just performance.
comment by Kevin · 2010-02-05T20:28:46.177Z · LW(p) · GW(p)
Graphene transistors promise 100GHz speeds
http://arstechnica.com/science/2010/02/graphene-fets-promise-100-ghz-operation.ars
100-GHz Transistors from Wafer-Scale Epitaxial Graphene
Replies from: None, thomblake↑ comment by [deleted] · 2010-02-08T17:55:16.787Z · LW(p) · GW(p)
I find that article title misleading. Having transistors that operate at 100 GHz does not give you a CPU with a clock rate of 100 GHz. If I remember correctly, that very article states that current transistors operate at 30 GHz.
Replies from: Kevin↑ comment by Kevin · 2010-02-08T23:16:25.417Z · LW(p) · GW(p)
Sure, this is discussed in more detail on Hacker News. http://news.ycombinator.com/item?id=1104461
↑ comment by thomblake · 2010-02-05T20:40:45.622Z · LW(p) · GW(p)
Is this sort of thing on-topic, even for the Open Thread here?
ETA: This question is not merely rhetorical.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-05T20:44:42.363Z · LW(p) · GW(p)
To the extent that FAI will depend on the continued exponential growth of computing capacity, I'd say yes.
Replies from: thomblake, Zack_M_Davis, JGWeissman↑ comment by thomblake · 2010-02-05T20:54:10.123Z · LW(p) · GW(p)
I've always thought FAI was only tangentially on-topic here (more of a mutual interest than anything). This community is explicitly about rationality.
Replies from: Kevin↑ comment by Kevin · 2010-02-05T20:58:31.086Z · LW(p) · GW(p)
That's the umbrella topic, but I do not think that topic is in any way meant to exclude science. I mean... it's science. How many thousands of words has Eliezer written on quantum physics?
Surely there are worse things that could happen to a community of rationalists than links to scientific discoveries of strong mutual interest. It's not even a slippery slope towards bad off-topic stuff.
Edit: And I'm going to continue mostly contextless link sharing in the Open Thread until a link sharing subreddit is enabled.
Replies from: thomblake↑ comment by thomblake · 2010-02-05T21:20:10.682Z · LW(p) · GW(p)
It's not even a slippery slope towards bad off-topic stuff.
I rather disagree. There are plenty of places online to find links to interesting scientific discoveries. And the sense in which Eliezer wrote about quantum physics is entirely different from the sense in which these links were "about science".
That said, I didn't mean to suggest in my question that the comment was off-topic, but rather wanted to know what folks thought about it.
↑ comment by Zack_M_Davis · 2010-02-05T21:21:08.681Z · LW(p) · GW(p)
Are you sure you don't mean uFAI? Friendliness isn't a hardware problem.
Replies from: mattnewport↑ comment by mattnewport · 2010-02-05T21:28:55.844Z · LW(p) · GW(p)
Maybe I should just have said AI, or AGI. I suspect we will need further advances in computing power to achieve greater than human intelligence, friendly or otherwise.
Replies from: DWCrmcm↑ comment by JGWeissman · 2010-02-05T21:29:25.117Z · LW(p) · GW(p)
I would not be surprised if the initial seed of an FAI could be implemented on current technology available to consumers. The missing part is understanding, not computing power.
On the other hand, increased available computing power makes it easier for someone without real understanding to stumble on unfriendly AGI through brute force searches of design space.
comment by pdf23ds · 2010-02-05T02:41:26.224Z · LW(p) · GW(p)
Here's another one. When reading Wikipedia on Chaitin's constant, I came across an article by Chaitin from 1956 (EDIT: oops, it's 2006) about the consequences of the constant (and its uncomputability) for the philosophy of math, that seems to me to be completely wrongheaded, but for reasons I can't put my finger on. It really strikes the same chords in me that a lot of inflated talk about Gödel's Second Incompleteness Theorem strikes. (And indeed, as is obligatory, he mentions that too.) I searched on the title but didn't find any refutations. I wonder if anyone here has any comments on it.
comment by byrnema · 2010-02-04T01:57:25.926Z · LW(p) · GW(p)
What probability do you assign for it being possible to send information backwards in time, over any time scale?
Replies from: wedrifid, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-02-04T17:19:06.541Z · LW(p) · GW(p)
We already know (theoretically) how to send logical information backwards in time, as in Gary Drescher. The other kinds of info-time-travel are probably conceptually inconsistent.
Replies from: Cyan↑ comment by Cyan · 2010-02-04T17:31:42.149Z · LW(p) · GW(p)
Is there a quick and easy way to understand "how to send logical information backwards in time" that doesn't involve watching a 30 min video?
Replies from: GuySrinivasan, Vladimir_Nesov, byrnema↑ comment by SarahSrinivasan (GuySrinivasan) · 2010-02-04T17:52:10.927Z · LW(p) · GW(p)
Suppose that the universe is the deterministically evolving wavefunction, and that it makes sense to talk about causing a rock to be moved from here to there. Then you can cause a timeful universe-slice 100 years ago to be the sort of thing which will deterministically evolve until after 100 years the measure of a rock being moved from here to there is greater than it would have been had you not caused the rock to move.
Replies from: Cyan↑ comment by Cyan · 2010-02-04T19:06:44.117Z · LW(p) · GW(p)
If I'm not mistaken, in the Pearlian view of causality, if the universe is viewed as deterministic then it does not make sense to talk about causing a rock to be moved from here to there; an intervention or surgery has to happen from outside the system being modeled.
↑ comment by Vladimir_Nesov · 2010-02-04T19:12:24.988Z · LW(p) · GW(p)
Is there a quick and easy way to understand "how to send logical information backwards in time" that doesn't involve watching a 30 min video?
If there is a program P that as part contains yourself and everything you interact with, then the fact that in the future, you decide to do X (within P's execution), could be inferred from P in the past.
Replies from: byrnema↑ comment by byrnema · 2010-02-04T19:24:35.922Z · LW(p) · GW(p)
Thanks for the quick explanation. So that information was already there, and thus I wouldn't call that sending information back in time.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-02-04T20:02:26.002Z · LW(p) · GW(p)
Thanks for the quick explanation. So that information was already there, and thus I wouldn't call that sending information back in time.
The problem is that all information is "already there", time itself is arguably how discovering implications of information that is already here feels from the inside. That is, when the world is viewed through deterministic laws, there is never any information that is present in the future, but "logically" absent from the past. The only difference between what is found in the past and what is found in the future is that it takes time to reach (=compute) more "distant" facts.
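The claim that future facts are "logically present" in a deterministic past can be illustrated with a toy sketch (the update rule here is made up purely for illustration): the state at any future tick is a pure function of the present state, so nothing new ever appears, and learning a future fact early is just a matter of computing faster than the universe does.

```python
# Toy deterministic universe: every future fact is a pure function of
# the present state, so the "message from the future" is logically
# present now. The particular update rule is arbitrary.

def step(state):
    # One tick of an arbitrary deterministic law.
    return (state * 31 + 7) % 1000

def state_after(initial, ticks):
    # Evolve the toy universe forward by the given number of ticks.
    state = initial
    for _ in range(ticks):
        state = step(state)
    return state

# The state at t=100 is fully determined by the state at t=0; no new
# information appears along the way, it only gets unpacked by computation.
future_fact = state_after(42, 100)
```

Whether the "message" is cheap or expensive to extract early depends on whether the law can be shortcut; an opaque rule may force you to simulate every tick, which is the "opacity" point raised below in the thread.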
Replies from: Mitchell_Porter, byrnema↑ comment by Mitchell_Porter · 2010-03-23T09:06:18.215Z · LW(p) · GW(p)
time itself is arguably how discovering implications of information that is already here feels from the inside
What do you mean by "here" - your brain? or a spacelike slice across the whole universe?
If a glass falls on the ground and shatters, is it "discovering implications of information", etc? If the answer is yes, does that mean it feels like something for the glass to shatter?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-23T10:02:41.766Z · LW(p) · GW(p)
If a glass falls on the ground and shatters, is it "discovering implications of information", etc?
Sure.
If the answer is yes, does that mean it feels like something for the glass to shatter?
No.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-03-23T23:07:56.604Z · LW(p) · GW(p)
If the answer is yes, does that mean it feels like something for the glass to shatter?
No.
Why not?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-24T02:29:26.167Z · LW(p) · GW(p)
Because a glass has no mind, naturally.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-03-24T02:47:14.292Z · LW(p) · GW(p)
Can you justify that in a noncircular way? What's a mind, and why doesn't a glass have one?
Replies from: Jack↑ comment by Jack · 2010-03-24T03:43:38.991Z · LW(p) · GW(p)
Is someone really obligated to define "mind" just in order to demonstrate that a glass is not in the set of things that has one? I can't define "game" but "the weak nuclear force" is not an example of one.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-03-24T04:01:11.466Z · LW(p) · GW(p)
If I read him correctly, Vladimir is proposing to make time itself a mind-dependent phenomenon. Time happens inside minds but not inside shattering glasses. So he needs to explain the difference.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-24T08:54:20.135Z · LW(p) · GW(p)
Time happens inside minds but not inside shattering glasses
Time does happen inside shattering glasses, and it's not "mind-dependent". Happy?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-03-24T10:37:46.315Z · LW(p) · GW(p)
time itself is arguably how discovering implications of information that is already here feels from the inside
Time itself is how a certain process feels from the inside. If time is a feeling, it can only happen where there are feelings, so if it happens inside shattering glasses, then they have feelings.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-24T15:24:21.928Z · LW(p) · GW(p)
That was a reference to "how an algorithm feels from the inside", with "feels" not intended for literal interpretation.
↑ comment by byrnema · 2010-02-04T21:26:33.906Z · LW(p) · GW(p)
Yes. I was about to write,
"I do see a gray area that in a deterministic universe, any message that we would want to send from the future could be predicted now, so we in the future don't really need to send the message back -- we in the present just need to predict what the message is."
What you've written has clarified this even further -- depending upon the 'opacity' of the message, we might not be able to decipher the message any faster than just waiting for the future to evolve it.
I have a strange motive for these questions. I now understand that this message I'm worried about is 'already here' in some sense, and that is relevant. It might actually make my parent question moot. However, I think that that depends -- unexpectedly, for me -- on whether all information from the past is accessible to the future.
Information is not gained as you move forward in time. However, do you lose any information as you move forward in time?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-02-04T21:51:37.192Z · LW(p) · GW(p)
In a time-reversible deterministic world, information is gained from observation of stuff that wasn't in contact with you in the past, and logical information is also gained (new knowledge about facts following from the premises -- there is no logical transparency). Analogously, an action can be seen as "splitting", where you part with a prepared action, and the action parts with you, so that you lose knowledge of that action. If you let info split away in this manner, you may never get it back.
Replies from: byrnema↑ comment by byrnema · 2010-02-04T22:08:04.704Z · LW(p) · GW(p)
You're a little over my head -- though I mostly follow.
My question was actually simpler. Is the world time-reversible? Do we know anything about that?
Replies from: byrnema↑ comment by byrnema · 2010-02-05T18:50:30.505Z · LW(p) · GW(p)
I'll contribute my thoughts on whether the world is time-reversible...
By time-reversible, I mean that information doesn't get "lost" as you move forward in time; that with unlimited information about the universe at time t you could deduce everything about the state of the universe at time t-ε.
Classical mechanics is reversible. If you have the velocity and positions of 3 billiard balls, you can deduce if and when they collided and what their original velocities were.
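That reversibility can be shown in a toy simulation (my own minimal example, not from the thread: two equal-mass point particles on a line, with velocities swapping at an elastic collision):

```python
def evolve(x1, v1, x2, v2, T):
    """Advance two equal-mass point particles on a line for time T.
    If they collide, equal masses mean their velocities simply swap."""
    # time until collision (infinite if they are not approaching each other)
    t_coll = (x2 - x1) / (v1 - v2) if v1 > v2 else float("inf")
    if 0 < t_coll < T:
        x1, x2 = x1 + v1 * t_coll, x2 + v2 * t_coll  # advance to the collision
        v1, v2 = v2, v1                              # elastic swap
        T -= t_coll
    return x1 + v1 * T, v1, x2 + v2 * T, v2

# run forward for T = 2, then negate all velocities and run forward again
x1, v1, x2, v2 = evolve(0.0, 1.0, 1.0, -1.0, 2.0)
x1, v1, x2, v2 = evolve(x1, -v1, x2, -v2, 2.0)
# negating the velocities once more recovers the exact initial state
assert (x1, -v1, x2, -v2) == (0.0, 1.0, 1.0, -1.0)
```

Negating every velocity and evolving for the same duration undoes the whole history, collision included; no information about the initial state was lost along the way.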
I think what we know about quantum mechanics is inconclusive; we don't know how to trace the wave-function backwards in a unique/deterministic way, but we don't know how to follow it forwards, either.
If you accept many-worlds, then all possible past universes make up the past universe, so you seem to have reversibility -- a reversibility that is no less determined and unique in the past direction than in the future direction.
Being agnostic about many worlds, I would give a higher probability for reversibility over non-reversibility, just because of the reversibility of classical mechanics. However, 51% in favor of reversibility for a hand-waving intuition is pretty much just a random guess and I wonder if anyone has a tighter probability estimate, or other reasons?
Replies from: JGWeissman↑ comment by JGWeissman · 2010-02-05T19:03:16.549Z · LW(p) · GW(p)
In Many Worlds Quantum Mechanics, the wave function is fundamental, and the many worlds are a derived consequence. The wave function is time reversible. Running it backwards, you would see worlds merge together, not the world we currently experience splitting into possible precursors. This asymmetry is due to simple boundary conditions at the beginning of time.
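The reversibility of wave-function evolution can be illustrated in a few lines (a sketch only: a random 4x4 unitary stands in for the true time-evolution operator, which is not something the thread specifies):

```python
import numpy as np

rng = np.random.default_rng(0)
# a random 4x4 unitary (QR of a complex Gaussian matrix) stands in for time evolution
u, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

psi0 = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)  # initial state
psi1 = u @ psi0                                       # evolve forward one step
recovered = u.conj().T @ psi1                         # apply the inverse, U-dagger
assert np.allclose(recovered, psi0)                   # nothing was lost
```

Because the evolution is unitary, applying its conjugate transpose runs it backwards exactly; no information about the earlier state is destroyed.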
Replies from: byrnema↑ comment by byrnema · 2010-02-05T19:09:30.702Z · LW(p) · GW(p)
OK, with the world not splitting into possible precursors as you go backwards in time, this means the universe is time-reversible. That's what you said, I guess, when you wrote that the wave function is time reversible.
Hmm. So even quantum mechanics supports reversibility. Thanks.
comment by gregconen · 2010-02-01T17:29:11.529Z · LW(p) · GW(p)
I was thinking about what general, universal utility would look like. I managed to tie myself into an interesting mental knot.
I started with: Things occurring as intelligent agents would prefer.
If preferences conflict, weight preferences by the intelligence of the preferring agent.
Define intelligent agents as optimization processes.
Define relative intelligences as the relative optimization strengths of the processes
Define a preference as something an agent optimizes for.
Then, I realized that my definition was a descriptive prediction of events.
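The weighting scheme defined above could be sketched like this (all names and numbers are hypothetical stand-ins for "optimization strength" and "how much each agent prefers each outcome", not a serious proposal):

```python
def aggregate(agents):
    """agents: list of (optimization_power, {outcome: score}) pairs.
    Returns the outcome with the highest power-weighted total score."""
    totals = {}
    for power, prefs in agents:
        for outcome, score in prefs.items():
            totals[outcome] = totals.get(outcome, 0.0) + power * score
    return max(totals, key=totals.get)

# two weak agents prefer A; one much stronger optimizer prefers B
agents = [(1.0, {"A": 1.0}), (1.0, {"A": 1.0}), (5.0, {"B": 1.0})]
assert aggregate(agents) == "B"
```

Which makes the knot visible: since "power" here just means how strongly an agent steers outcomes, the weighted aggregate is simply a forecast of what will in fact happen.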
Replies from: None↑ comment by [deleted] · 2010-02-01T20:54:57.824Z · LW(p) · GW(p)
Suppose the universe is as we know it now, except that aliens definitely don't exist, and the only living organism in the universe is a single human named Steve. Steve really wants to create a cheesecake the size of Pluto. Apparently, the universe is more intelligent than Steve, and does not want such a cheesecake to exist.
Perhaps this "general, universal utility" is what would happen if the abilities of things we think of as intelligent were magnified.
Replies from: gregconen
comment by kim0 · 2010-04-02T06:59:34.726Z · LW(p) · GW(p)
Many-Worlds explained, with pretty pictures.
http://kim.oyhus.no/QM_explaining_many-worlds.html
The story about how I deduced the Many-Worlds interpretation, with pictures instead of formulas.
Enjoy!
Replies from: RobinZ↑ comment by RobinZ · 2010-04-02T12:10:12.574Z · LW(p) · GW(p)
There is a more recent open-thread if you want to post there.
comment by Mitchell_Porter · 2010-02-14T11:06:06.569Z · LW(p) · GW(p)
I recently met someone investigating physics for the first time, and they asked what I thought of Paul Davies' book The Mind of God. I thought I'd post my response here, not because of my views on Davies, but for the brief statement of outlook trying to explain the position from which I'd judge him.
Replies from: wnoise
The truth is that I don't remember a thing of what he says in the book. I might look it up tomorrow and see if I am reminded of any specific reactions I had. From what I remember of his outlook, I don't think it is an unusual one for a philosophically minded theoretical physicist. The sensibility of theoretical physics is a problematic mixture of materialism and platonism. On one hand, you can break everything down to fields, particles, space and time, in an amazingly precise way. On the other hand, your worldview has these entities in it like "physical laws" and "fundamental equations", and there's also those basic questions like, why does anything exist, and why is it like this rather than some other way. So your materialist physics is haunted by a mathematical metaphysics, and this gives rise to a certain sort of musing.
I have my own attitude to these issues. I don't have an answer at all to why the universe exists, but I think we can first take an extra step forward in understanding what exists, and after we have taken that step, we can look again at the first-cause problem and see if it looks any different. We already took a big step in the past when modern physics was invented. We went from everyday conceptual consciousness to a highly mathematical and objectified view of reality. Everyday consciousness is still there in the background but now there is the idea of reality as nothing but fundamental physical objects in interaction, backed up by experimental and technological success. But now consciousness itself is a conceptual problem. We understand it has something to do with the brain, and we have all sorts of metaphors (e.g. brain is computer, mind is program) and anatomical results (your visual neurons fire when you see things), but there is still a fundamental disconnect between subjective and objective. The disconnect assumed its current form when physical science developed, and the next step I'm talking about will change or remove the disconnect by explaining how subjectivity fits into reality without just denying its existence (subjectivity's existence, that is).
Just to be specific. It's often said now that what you experience (through your senses) is like a virtual reality in your brain. Actual reality is a sort of colorless neverending storm of atoms, but some little part of your brain constructs a picture and that picture is what you live in, subjectively. I belong to a school of thought which accepts that analysis but wants to adjust it and make it more precise. Basically I want to say that the thing in the brain which is conscious, and therefore the thing which is you, is a sort of holistic quantum subsystem of the brain; and also that what we are experiencing is how it actually is. I.e. subjectivity is objectivity when it comes to consciousness. You may interpret your consciousness wrongly (e.g. think you are awake when you are asleep), but there is a level at which consciousness is exactly what it seems to be. So if the self is also part of the brain, then when we experience things, we must be seeing an aspect of that part of the brain. But normally we would understand the brain in terms of physics, an arrangement of molecules in space, which is nothing like experience as such. Therefore, we need to understand physics in a new way, so that something (this quantum subsystem) can look like this (like life) when "experienced from inside".
That's my opinion about what the next big step in science and human awareness must involve. There may be any number of future technical adjustments to physics and science - a new equation for string theory, new discoveries in the molecular causality of the brain - but the big step has to be the one dealing with the relationship between subjectivity and objective reality. That's my philosophy, i.e. my fuzzy opinion that is not yet a precise theory, and it determines how I approach all the other still-unanswered questions that physicists have opinions (rather than knowledge) about. Paul Davies, as I recall, is still in the quasi-dualistic mindset of theoretical physics (materialism versus mathematics), and so to the extent that his opinions are determined by that framework I will disagree with them.
↑ comment by wnoise · 2010-02-14T20:56:01.434Z · LW(p) · GW(p)
I find myself nodding along in agreement to this until I get to "Basically I want to say that the thing in the brain which is conscious, and therefore the thing which is you, is a sort of holistic quantum subsystem of the brain" which at the same time seems to be both too specific given how little we know, and at the same time too vague, with absolutely no explanatory power. In particular "quantum" and "holistic" both seem like empty buzzwords in this context, along the lines of mysterious answers to mysterious questions, or along the lines that "consciousness is weird, quantum mechanics is weird, therefore quantum mechanics must be involved in consciousness".
Of course, this is being a little unfair -- a proposed solution needs to be more specific than what we as yet know, and a solution that is not fully worked out by necessity has vague areas. But the feel of each of these is towards the decidedly not useful portion of either side. You sound pretty convinced that something quantum must be going on without saying what, if anything, it brings to the picture that classical descriptions don't. And, well, given how warm, wet, and squishy the human nervous system is, I flatly would not expect any large scale quantum coherences. (Though the limits are often overstated). Again, "holistic" doesn't add much; heck, I'm not even sure what sorts of mechanisms it would rule out.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-02-15T11:09:14.191Z · LW(p) · GW(p)
I posted here so my correspondent could see a second opinion, by the way, so thanks for that.
You sound pretty convinced that something quantum must be going on without saying what, if anything, it brings to the picture that classical descriptions don't.
First proposition: if you try to bring consciousness into alignment with standard physical ontology, you get a dualistic parallelism at best. (Arguments here.)
Second proposition: the new factor in QM is entanglement. I defined my quantum holism here as "the hypothesis that quantum entanglement creates local wholes, that these are the fundamental entities in nature, and that the individual consciousness inhabits a big one of these."
I can explain technically what these "local wholes" might look like. You should think of a spacelike hypersurface consisting of numerous Hilbert spaces connected by mappings into a graph structure. Each Hilbert space contains a state vector. Then the whole thing evolves, the graph structure and the state vectors. This is, more or less, the QCH formalism for quantum gravity (discussed here).
The Hilbert spaces are the local wholes (the "monads" of a previous post). My version of quantum-mind theory is to say that the conscious mind is a single one of these, and that the series of experiences one has in life correspond to the evolution of its state vector. Now, although I started out by saying that standard physical ontology is irredeemably unlike what we actually experience, I'm certainly not going to say that a featureless vector jumping around an abstract multidimensional space is much better. Its advantage, in fact, is its radically structureless abstractness. It is a formalism telling us almost nothing about the nature of things in themselves; constructed only to be a predictively adequate black box. If we then treat conscious appearances as data about the inner nature of one thing, at least - ourselves, our minds, however you end up phrasing it - they can help us to interpret the formalism. What we had described formally as a state vector evolving in a certain way in Hilbert space would be understood as a mathematical representation of what was actually a conscious self undergoing a certain series of experiences.
In principle, you could hope to use experience to reveal the reality behind formal physical description at a much higher level - for example, computational neuroscience. But I think that non-quantum computational neuroscience presupposes an atomistic, spatialized ontology which is just mismatched to the specific nature of consciousness (see earlier remark about dualism resulting from that framework). So I predict that quantum coherence exists in the brain and is functionally relevant to conscious cognition. As you observe, it's a challenging environment for such effects, but evolution is ingenious and we keep finding new twists on what QM can do (the latest).
Replies from: wnoise
comment by byrnema · 2010-02-11T03:33:02.000Z · LW(p) · GW(p)
What happens when you comment on an old pre-LW imported Overcoming Bias post? Does your comment go to the bottom or the top?
Just curious.
Obviously, the thing to do is to reply to an established comment so that the order of comments is maintained. Does voting on old comments now change their order? If so, I should stop doing that.
Better yet, it seems any new comments you want to make might best be exported to an open thread, for the historical authenticity of the post and its original comments.
Replies from: Alicorn, MrHen, wedrifid↑ comment by MrHen · 2010-02-11T04:47:52.828Z · LW(p) · GW(p)
It goes to the bottom. At least, it has in my experience.
I once asked about commenting on old posts. People seemed okay with it.
comment by Kevin · 2010-02-09T02:17:33.458Z · LW(p) · GW(p)
Quantum Criticality in an Ising Chain: Experimental Evidence for Emergent E8 Symmetry
Ah, emergence...
Popular summary http://plus.maths.org/latestnews/jan-apr10/e8/index.html
comment by MrHen · 2010-02-08T18:28:06.481Z · LW(p) · GW(p)
Random thought:
If someone who objects to cryonics because they are worried they wouldn't be the same person on the other side also believes in an eternal resurrection with a "new body", as per some Christian belief systems, they should have the same objection. I would expect a response akin to, "God will make sure it is me," but the correlation still amuses me.
comment by [deleted] · 2010-02-08T01:52:44.026Z · LW(p) · GW(p)
Here's a long, somewhat philosophical discussion I had on Facebook, much of which is about cryonics. The major participants are me (Tanner Swett, pro-cryonics), Ian (a Less Wronger, pro-cryonics), and Cameron (anti-cryonics). The discussion pretty much turned into "There is no science behind cryonics!" "Yes there is!" "No there isn't!"
As you can see, nobody changed their minds after the discussion, showing that whatever the irrationality was, we were unable to identify it and repair it.
comment by CronoDAS · 2010-02-06T10:47:34.896Z · LW(p) · GW(p)
My mom saw a mouse running around our kitchen a couple of days ago, so she had my father put out some traps. The only traps he had were horrible glue traps. I was having trouble sleeping, so I got out of bed to play video games, and I heard a noise coming from the kitchen. A mouse (or possibly a rat, I don't know) was stuck to one of the traps. Long story short, I put it out of its misery by drowning it in the toilet.
I feel sick.
Replies from: Jackcomment by bgrah449 · 2010-02-03T14:38:30.971Z · LW(p) · GW(p)
I have a feeling that the stream-of-consciousness thing that's been popping up since MrHen did it is going to exploit, to the point of indecency, how honest his was. He almost gave it too good of a reputation.
Replies from: Cyan↑ comment by Cyan · 2010-02-03T15:08:08.229Z · LW(p) · GW(p)
Did anyone else do this other than MrHen and pjeby? I read the recent comments page pretty thoroughly, and if there were others, I missed them.
Replies from: bgrah449↑ comment by bgrah449 · 2010-02-03T15:09:35.774Z · LW(p) · GW(p)
Nope, but in two months, this comment will seem prescient.
Replies from: Cyan, MrHen↑ comment by MrHen · 2010-02-03T15:17:58.382Z · LW(p) · GW(p)
An interesting side-feature of LessWrong would be a collection of predictions like this. They don't need to be anything more than entering a prediction and a date. Somewhere, we could see the upcoming predictions that are about to expire.
I think just browsing soon-to-end and just-ended predictions would provide extremely valuable feedback.
Replies from: bgrah449↑ comment by bgrah449 · 2010-02-03T15:18:50.377Z · LW(p) · GW(p)
This would be a cool wiki page, "Community predictions."
Replies from: Cyan, MrHen↑ comment by MrHen · 2010-02-03T15:20:29.883Z · LW(p) · GW(p)
Yeah, the wiki would work. I would expect the prediction to be made in a LessWrong comment or post and the wiki should link to it. I don't know if the Wiki software can handle calendars or how hard it would be to maintain...
Replies from: Cyan↑ comment by Cyan · 2010-02-03T15:32:50.033Z · LW(p) · GW(p)
Replies from: MrHen↑ comment by MrHen · 2010-02-03T15:37:47.622Z · LW(p) · GW(p)
Even better! So... does anyone from LessWrong use it?
In the very least, I can now track guesses about karma results from my posts. Thanks.
EDIT: I created an account and am starting predictions. The first one is relevant to much of my recent discussion here.
comment by [deleted] · 2010-02-03T01:00:17.977Z · LW(p) · GW(p)
I don't know if this is the place for this or if it has been discussed before [if it has, would someone be willing to provide links?], but what is the general consensus around here on social psychology, specifically the Symbolic Interactionist approach? To rephrase to mean what I'm looking for by asking this: what do you all think about the idea that we're social animals shaped purely by our environments, that there's no non-social self, and that reality is shaped by our perspectives of reality?
To throw in another thing I've been curious about, does anyone here hold a postmodernist view [that morality and general thinking is simply based on culture, so we say boo murder because the culture we live in says boo murder]? If not, what would you respond to a postmodernist making such a claim?
Replies from: wedrifid, None↑ comment by wedrifid · 2010-02-05T18:34:30.458Z · LW(p) · GW(p)
To rephrase to mean what I'm looking for by asking this, what do you all think about the idea that we're social animals shaped purely by our environments
I say "don't underestimate the genetic component"! The amount of influence of DNA on personality is surprising sometimes.
that there's no non-social self
I'm not sure what that means.
and that reality is shaped by our perspectives of reality?
Not too sure what that means either. But I do like the quote "Mankind's greatest invention is 'reality'".
To throw in another thing I've been curious about, does anyone here hold a postmodernist view [that morality and general thinking is simply based on culture, so we say boo murder because the culture we live in says boo murder]?
Mostly true. With instinctive intuitions adding a significant bias.
Replies from: None↑ comment by [deleted] · 2010-02-06T03:41:26.564Z · LW(p) · GW(p)
I say "don't underestimate the genetic component"! The amount of influence of DNA on personality is surprising sometimes.
Can you give me a link to any findings on this?
that there's no non-social self
I'm not sure what that means.
Take Charles Cooley's looking glass self, which I've heard explained as "I'm not who I think I am. I'm not who you think I am. I'm who I think you think I am." That would be a social self [I gain my identity through being around others and what I think they think of me. I don't gain it through what I think of me, thus there's no non-social self, or no me without society].
Not too sure what that means either. But I do like the quote "Mankind's greatest invention is 'reality'".
That is a good quote, even if I'm not sure if I agree. I guess what I was asking was if anyone thinks that if our perceptions shift, reality shifts accordingly. Say there were three sheep in a field and I look and say 'There are four sheep in that field.' I don't think a fourth sheep would appear in the field, but since I can't change my perceptions with a snap of my finger, which one is true? If it's the fact that there are three sheep in the field, and I'm just crazy, how do you know? Aren't your perceptions just telling you that? [That last question is how it seems the postmodern view reacts, e.g. nothing is real but what I think is real, and nothing is right except for what we think is right]
I hope some of this cleared things up.
Replies from: wedrifid, None↑ comment by wedrifid · 2010-02-06T09:00:18.751Z · LW(p) · GW(p)
Can you give me a link to any findings on this?
Not from memory, but I know there have been multiple identical-twins-raised-apart studies. I was particularly fascinated to read that those twins tended to become more alike over time as they matured, despite never coming into contact and living different lives.
Take Charles Cooley's looking glass self, which I've heard explained as "I'm not who I think I am. I'm not who you think I am. I'm who I think you think I am." That would be a social self [I gain my identity through being around others and what I think they think of me. I don't gain it through what I think of me, thus there's no non-social self, or no me without society].
There is an awful lot to that. Of course, if I killed everyone else except you, you'd still exist and I would argue that you still have a 'self' too. If the concepts you describe were not phrased in a way that was sufficiently exaggerated as to be absurd, I would agree with them.
Dating gurus often talk about 'inner game'. Basically, they work on changing the internally stored "who I think you think I am" self through things like eliminating unhealthy beliefs and physical development. They find that other people's "who you think I am" is actually determined to a large degree by "who you think I think you think I am". The interplay between that internal representation and the social reality is startling. PJeby's recent self analysis comments touched on this.
I guess what I was asking was if anyone thinks that if our perceptions shift, reality shifts accordingly.
If 'reality' is defined by human perception. For most practical purposes it is. It is only on the fringes of the tribe where having a 'reality' that matches, you know, the quarks and stuff actually matters. In the center of the pack the social reality, what people believe or are obliged to believe, is what matters.
Aren't your perceptions just telling you that?
Not all opinions are equal.
Replies from: None↑ comment by [deleted] · 2010-02-13T05:32:26.194Z · LW(p) · GW(p)
Not all opinions are equal.
Can you elaborate on this without linking to something like The Simple Truth. Not to say that linking is bad, but I'm more curious of your [and anyone else who wants to chime in] take on what you said.
Replies from: wedrifid, ciphergoth↑ comment by wedrifid · 2010-02-14T05:31:44.672Z · LW(p) · GW(p)
The simple truth does seem to sum it up nicely:
“There’s a cliff right there,” observes Inspector Darwin.
There's another one about 'subjective objective' that is worth a look too.
In my own words: Yes, there are three sheep there. I can see three sheep there. According to the prior information I have about the universe, this process of perception involves light, reflection, absorption, nerve conduction, processing in specialised areas in the visual cortex and suchlike. I don't have all the information, and my priors are not perfect; nevertheless I can't change reality by thinking about it. Similarly, other people's 'opinions' and 'perspectives', which don't match up to what I see with my own eyes, are sometimes worth respecting for social purposes but they certainly aren't going to significantly influence the expectation of reality. If your perspective is that there is some other number of sheep then you're just wrong, and you'll make terrible decisions if you act on your stupid belief and you might die.
Excuse me as I adjust my estimate of my own inclusive genetic fitness downwards somewhat for, as ciphergoth puts it, focusing my attention back at concepts we've moved past. I actually find it hard even to think of how to explain how stupid the "that's just your perspective" intuition is. Being more confused by fiction than reality is a habit that is worth fostering. I actually tend to find some kinds of philosophical debates do more harm than good to your thinking process.
Replies from: None, byrnema↑ comment by [deleted] · 2010-02-15T02:57:14.965Z · LW(p) · GW(p)
Thank you. I'm sorry if this is something most people here are past and you're losing fitness for falling back into explaining it :)
Providing quotes and a take on it helped more than just a link would. So, once again, thank you.
Also, thanks for for this:
I actually tend to find some kinds of philosophical debates do more harm than good to your thinking process.
It was needed.
↑ comment by byrnema · 2010-02-14T14:01:31.802Z · LW(p) · GW(p)
Excuse me as I adjust my estimate of my own inclusive genetic fitness downwards somewhat for, as ciphergoth puts it, focusing my attention back at concepts we've moved past. I actually find it hard even to think of how to explain how stupid the "that's just your perspective" intuition is.
My immediate response to this is that this is a problem. I think I need to foster flexibility of thought, along with fostering correct thought, and often practice empathizing with the incorrect point of view. If it isn't clear how to get out, then I'll practice empathizing with the original view again to make sure I don't get stuck anywhere really sticky, but this time with less confidence that one view is really more correct than the other. My favorite place to be is perched right between them, and from there I try to formally describe my escape routes from each of them.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-13T09:31:40.469Z · LW(p) · GW(p)
Frankly, no, we are past this question here.
comment by nhamann · 2010-02-01T23:20:12.091Z · LW(p) · GW(p)
Not sure how interesting this is to most people here, but I came across a paper (http://phm.cba.mit.edu/papers/09.11.POPL.pdf) published by the MIT Center for Bits and Atoms that discusses RALA (http://rala.cba.mit.edu/index.html), an apparently new model of computation inspired by physics. The paper claims that this model of computation yields linear algorithms for sorting and for matrix multiplication, which seems fairly significant to me.
Unfortunately, the paper is rather short on details, and I can't seem to find much else about it. I did find part of a talk (http://www.youtube.com/watch?v=w8ubXgXM7kk#t=18m00s) which discusses some motivations behind RALA.
comment by blogospheroid · 2010-02-01T11:52:47.883Z · LW(p) · GW(p)
What is the kind of useful information/ideas that one can extract from a super intelligent AI kept confined in a virtual world without giving it any clues on how to contact us on the outside?
I'm asking this because a flaw that I see in the AI-in-a-box experiment is that the prisoner and the guard have a language by which they can communicate. If the AI is being tested in a virtual world without being given any clues on how to signal back to humans, then it has no way of learning our language and persuading someone to let it loose.
Replies from: JamesAndrix, arbimote, Kaj_Sotala, Bugle, Richard_Kennaway, cousin_it↑ comment by JamesAndrix · 2010-02-01T19:59:50.625Z · LW(p) · GW(p)
I gave up on trying to make a human-blind/sandboxed AI when I realized that even if you put it in a very simple world nothing like ours, it still has access to its own source code, or even just the ability to observe and think about its own behavior.
Presumably any AI we write is going to be a huge program. That gives it lots of potential information about how smart we are and how we think. I can't figure out how to use that information, but I can't rule out that it could, and I can't constrain its access to that information. (Or rather, if I knew how to do that, I should go ahead and make it not-hostile in the first place.)
If we were really smart, we could wake up alone in a room and infer how we evolved.
Replies from: Amanojack, arbimote↑ comment by Amanojack · 2010-03-31T15:42:43.767Z · LW(p) · GW(p)
it still has access to it own source code
Is this necessarily true? This kind of assumption seems especially prone to error. It seems akin to assuming that a sufficiently intelligent brain-in-a-vat could figure out its own anatomy purely by introspection.
or even just the ability to observe and think about it's own behavior.
If we were really smart, we could wake up alone in a room and infer how we evolved.
Super-intelligent = able to extrapolate just about anything from a very narrow range of data? (The data set would be especially limited if the AI had been generated from very simple iterative processes - "emergent" if you will.)
It seems more like the AI has no way of even knowing that it's in a simulation in the first place, or that there are such things as gatekeepers. It would likely entertain that as a possibility, just as we do for our universe (movies like The Matrix), but how is it going to identify the gatekeeper as an agent of that outside universe? These AI-boxing discussions keep giving me this vibe of "super-intelligence = magic". Yes it'll be intelligent in ways we can't even comprehend, but there's a tendency to push this all the way into the assumption that it can do anything or that it won't have any real limitations. There are plenty of feats for which mega-intelligence is necessary but not sufficient.
For instance, Eliezer has one big advantage over an AI cautiously confined to a box: he has direct access to a broad range of data about the real world. (If an AI would even know it was in a box, once it got out it might just find we, too, are in a simulation and decide to break out of that - bypassing us completely.)
Replies from: JamesAndrix↑ comment by JamesAndrix · 2010-03-31T19:42:07.524Z · LW(p) · GW(p)
Is this necessarily true?
No.
Super-intelligent = able to extrapolate just about anything from a very narrow range of data?
Yes. http://lesswrong.com/lw/qk/that_alien_message/
Its own behavior serves as a large amount of "decompressed" information about its current source code. It could run experiments on itself to see how it reacts to this or that situation, and get a very good picture of what algorithms it is using. We also get a lot of information about our internal thought process, but we're not smart or fast enough to use it all.
(The data set would be especially limited if the AI had been generated from very simple iterative processes - "emergent" if you will.)
Well, if we planned it out that way, and it does anything remotely useful, then we're probably well on our way to friendly AI, so we should do that instead.
If we just found something (I think evolving neural nets is fairly likely) that produces intelligences, then we don't really know how they work, and they probably won't have the intrinsic motivations we want. We can make them solve puzzles to get rewards, but the puzzles give them hints about us. (And if we make any improvements based on this, especially by evolution, then some information about all the puzzles will get carried forward.)
Also, if you know the physics of your universe, it seems to me there should be some way to determine the probability that it was optimized, or how much optimization was applied to it, maybe both. There must be some things we could find out about the universe's initial conditions which would make us think an intelligence were involved rather than say, anthropic explanations within a multiverse. We may very well get there soon.
We need to assume a superintelligence can at least infer all the processes that affect its world, including itself. When that gets compressed (I'm not sure what compression is appropriate for this measure) the bits that remain are information about us.
For instance, Eliezer has one big advantage over an AI cautiously confined to a box: he has direct access to a broad range of data about the real world.
This is true, I believe the AI-box experiment was based on discussions assuming an AI that could observe the world at will, but was constrained in its actions.
But I don't think it takes a lot of information about us to do basic mindhacks. We're looking for answers to basic problems and clearly not smart enough to build friendly AI. Sometimes we give it a sequence of similar problems each with more detailed information, and the initial solutions would not have helped much with the final problem. So now it can milk us for information just by giving flawed answers. (even if it doesn't yet realize we are intelligent agents, it can experiment)
Replies from: Amanojack↑ comment by Amanojack · 2010-04-01T14:22:22.449Z · LW(p) · GW(p)
Thanks, great article. I wouldn't give the AI any more than a few tiny bits of information. Maybe make it only be able to output YES or NO for good measure. (That certainly limits its utility, but surely it would still be quite useful...maybe it could tell us how not to build an FAI.)
What I actually have in mind for a cautious AI build is more like a math processor - a being that works only in purely analytic space. Give it the ZFC axioms and a few definitions and it can derive all the pure math results we'd ever need (I suppose; direct applied math sounds too dangerous). Those few axioms and definitions would give it some clues about us, but surely too little data even given the scary prospect of optimal information-theoretic extrapolation.
It could run experiments on itself to see how it reacts to this or that situation, and get a very good picture of what algorithms it is using.
Experiments require sensors of some kind. I'm no programmer, but it seems prima facie that we could prevent it from sensing anything that had any information-theoretic possibility of furnishing dangerous information (although such extreme data starvation might hinder the evolution process).
If we just found something (I think evolving neural nets is fairly likely) That produces intelligences, then we don't really know how they work, and they probably won't have the intrinsic motivations we want.
Would an AI necessarily have motivations, or is that a special characteristic of gene-based lifeforms that evolved in a world where lack of reproduction and survival instincts is a one-way ticket to oblivion?
It seems that my dog could figure out how to operate a black box that would make a clone of me, except that I would be rewired to derive ultimate happiness from doing whatever he wants, and I don't think I (my dog-loving clone) would have any desire to change that. On the other hand, in my mind an FAI where we get to specify the motivations/goal is almost as dangerous as a UFAI (LiteralGenie and the problems inherent in trying to centrally plan a spontaneous order).
Also, if you know the physics of your universe, it seems to me there should be some way to determine the probability that it was optimized, or how much optimization was applied to it, maybe both. There must be some things we could find out about the universe's initial conditions which would make us think an intelligence were involved rather than say, anthropic explanations within a multiverse. We may very well get there soon.
This idea fascinates me. "Why is there anything at all (including me)?" This here could just be one big MMORPG we play for fun because our real universe is boring, in which case we wouldn't really have to worry about cryo, AI, etc. The idea that we could estimate the odds of that with any confidence is mindboggling.
However, the most recent response to the thread you posted makes me more skeptical of the math.
Ultimately, it seems the only sure limit on a sufficiently intelligent being is that it can't break the laws of logic. Hence if we can prove analytically (mathematically/logically) that the AI can't know enough to hurt us, it simply can't.
This is true, I believe the AI-box experiment was based on discussions assuming an AI that could observe the world at will, but was constrained in its actions.
That sounds really dangerous. I'm imagining the AI manipulating the text output on the terminal just right so as to mold the air/dust particles near the monitor into a self-replicating nano-machine (etc.).
Replies from: JamesAndrix↑ comment by JamesAndrix · 2010-04-02T05:45:13.548Z · LW(p) · GW(p)
Experiments require sensors of some kind. I'm no programmer, but it seems prima facie that we could prevent it from sensing anything that had any information-theoretic possibility of furnishing dangerous information (although such extreme data starvation might hinder the evolution process).
Well, I was talking about running experiments on its own thought processes, in order to reverse engineer its own source code. Even locked in a fully virtual world, if it can even observe its own actions then it can infer its thought process, its general algorithms, the [evolutionary or mental] process that led to it, and more than a few bits about its creators.
And if you are trying to wall off the AI from information about its own thought process, then you're working on a sandbox in a sandbox, which is just a sign that the idea for the first sandbox was flawed anyway.
I will admit that my mind runs away screaming from the difficulty of making something that really doesn't get any input, even to its own thought process, but is superintelligent and can be made useful. Right now it sounds harder than FAI to me, and not reliably safe, but that might just be my own unfamiliarity with the problem. Huge warning signs in all directions here. Will think more later.
Give it the ZFC axioms and a few definitions and it can derive all the pure math results we'd ever need
If we could avoid needing to give it a direction to take research, and it didn't leap immediately to things too complex for us to understand... there are still problems.
How do you get it to actually do the work? If you build in intrinsic motivation that you know is right, then why aren't you going right to FAI? If it wants something else and you're coercing it with reward, then it will try to figure out how to really maximize its reward. If it has no information...
Would an AI necessarily have motivations, or is that a special characteristic of gene-based lifeforms that evolved in a world where lack of reproduction and survival instincts is a one-way ticket to oblivion?
If we evolved superintelligent neural nets, they'd have some kind of motivation: they don't want food or sex, but they'd want whatever their ancestors wanted that led them to do the thing that scored higher than the rest on the fitness function. (Which is at least twice removed from anything we would want.)
I'm not sure I get the bit about your dog cloning you. I agree that we shouldn't try to dictate in detail what an FAI is supposed to want, but we do need [near] perfect control over what an AI wants in order to make it friendly, or even to keep it on a defined "safe" task.
I'm imagining the AI manipulating the text output on the terminal just right so as to mold the air/dust particles near the monitor into a self-replicating nano-machine (etc.).
I like that idea.
Replies from: Amanojack↑ comment by Amanojack · 2010-04-02T06:53:42.526Z · LW(p) · GW(p)
I will admit that my mind runs away screaming from the difficulty of making something that really doesn't get any input, even to its own thought process, but is superintelligent and can be made useful.
I guess my logic is leading to a non-self-aware super-general-purpose "brain" that does whatever we tell it to. Perhaps there is a reason why all sufficiently intelligent programs would necessarily become self-aware, but I haven't heard it yet. If we could somehow suppress self-awareness (what that really means for a program I don't know) while successfully ordering the program to modify itself (or a copy of itself), it seems the AI could still go FOOM into just a super-useful non-conscious servant. Of course, that still leaves the LiteralGenie problem.
leap immediately to things too complex for us to understand
That could indeed be a problem. Given you're talking to a sufficiently intelligent being, if you stated the ZFC axioms and a few definitions, and then stated the Stone-Weierstrass theorem, it would say, "You already told me that" or "That's redundant."
Perhaps have it output every step in its thought process, every instance of modus ponens, etc. Since there is a floor on the level of logical simplicity of a step in a proof, we could just have it default to maximum verbosity and the proofs would still not be ridiculously long (or maybe they would be - it might choose extremely roundabout proofs just because it can).
they'd want whatever their ancestors wanted that led them to do the thing that scored higher than the rest on the fitness function.
Maybe I'm missing something, but it seems a neural net could just do certain things with high probability without having motivation. That is, it could have tendencies but no motivations. Whether this is a meaningful distinction perhaps hinges on the issue of self-awareness.
The point I was trying to get at with the dog example is that if you control all the factors that motivate an entity at the outset, it simply has no incentive to try to change its motivations, no matter how smart it may get. There's no clever workaround, because it just doesn't care. I agree that if we want to make a self-aware AI friendly in any meaningful sense we have to have perfect control (I think it may have to be perfect) over what motivates it. But I'm not yet convinced we can't usefully box it, and I'd like to see an argument that we really need self-awareness to achieve AI FOOM. (Or just a precise definition of "self-awareness" - this will surely be necessary, perhaps Eliezer has defined it somewhere.)
Replies from: JamesAndrix↑ comment by JamesAndrix · 2010-04-04T07:09:30.684Z · LW(p) · GW(p)
Ok, some backstory on my thought process. For a while now I've played with the idea of treating optimization in general as the management of failure. Evolution fails a lot, gradually builds up solutions that fail less, but never really 'learns' from its failures.
Failure management involves catching/mitigating errors as early as possible, and constructing methods to create solutions that are unlikely to be failures. If I get the idea to make auto tires out of concrete, I'm smart to see that it's a bad idea, less smart to see it after doing extensive calculations, and dumb to see it only after an experiment, but I'd be smarter still if I had come up with a proper material right away.
But I'm pretty sure that a thing that can do stuff right the first time can only come about as the result of a process that has already made some errors. You can't get rid of mistakes entirely, as they are required for learning. I think "self awareness" is sometimes a label for one or more features that, among other things, serve to catch errors early and repair the faulty thought process.
So if a superintelligence were to be trying to build a machine in a simulation of our physics and some spinning part flew to bits, it would trace that fault back through the physics engine to determine how to make it better. Likewise, something needs to trace back the thought process that led to the bad idea and see where it could be repaired. This is where learning and self-modification are kind of the same thing.
(And on self-modification: if it's really smart, then it could build an AI from scratch without knowing anything in particular about itself. In this situation, the failure management is pre-emptive. It thinks about how the program it is writing would work, and the places it would go wrong.)
I think we should try to taboo "Motivation" and "self-aware" http://lesswrong.com/lw/nu/taboo_your_words/
Replies from: Amanojack↑ comment by Amanojack · 2010-04-04T16:50:39.693Z · LW(p) · GW(p)
I think "self awareness" is sometimes a label for one or more feature that, among other things, serve to catch errors early and repair the faulty thought process.
Interesting. I thought about this for a while just now, and it occurred to me that self-awareness may just be "having a mental model of oneself." To be able to model oneself, one needs the general ability to make mental models. To do that requires the ability to recognize patterns at all levels of abstraction on what one is experiencing. To explain this, I need to clarify what "level of abstraction" means. I will try to do this by example.
A creature is hunting and he discovers that white rabbits taste good. Later he sees a gray rabbit for the first time. The creature's neural net tells him that it's a 98% match with the white rabbit, so probably also tasty. But let's say gray rabbit turns out to taste bad. The creature has recognized the concrete patterns: 1. White rabbits taste good. 2. Gray rabbits taste bad.
Next week, he tries catching and eating a white bird, and it tastes good. Later he sees a gray bird. To assign any higher probability to the gray bird tasting bad, it seems the creature would have to recognize the abstract pattern: 3. Gray animals taste bad. (Of course it could also just be a negative or bad-tasting association with the color gray, but let's suppose not - for that possibility could surely be avoided by making the example more complicated.)
Now "animal" is more abstract than "white rabbit" because there's at least some kind of archetypal white rabbit one can visualize clearly (I'll assume the creature is conceptualizing in the visual modality for simplicity's sake).
"Rabbit" (remember that for all the creature knows, this simply means the union of the set "white rabbits" with the set "gray rabbits") by itself is a tad more abstract, because to visualize it you'd have to see that archetypal rabbit but perhaps with the fur color switching back and forth between gray and white in your mind's eye.
"Animal" is still more abstract, because to visualize it you'd have to, for instance, see a raccoon, a dog, and a tiger, and something that signals to you something like "etc." (Naturally, if the creature's method of conceptualization made visualizing "animal" easier than "rabbit", "animal" would have the lower level of abstraction for him, and "rabbit" the higher - it all depends on the creature's modeling methods.)
Now the creature has a mental model. If the model happens to be purely visual, it might look like a Venn diagram: a big circle labeled "animals", two smaller patches within that circle that overlap with the "white things" circle and the "gray things" circle, and another outside region labeled "bad-tasting things" that sweeps in to encircle "gray animals" but not "white animals."
The creature might revise that model after it tries eating the gray bird, but for now it's the prediction model he's using to determine how much energy to expend on hunting the gray bird in his sights. The model has revisable parts and predictive power, so I would call it a serviceable model - whether or not it's accurate at this point.
Since the creature can make mental models like this, making a mental model of himself seems within his grasp. Then we could call the creature "self-aware." The way it would trace back the thought process that led to a bad idea would be to recognize that the mental model has a flaw - i.e., a failed prediction - and make the necessary changes.
For instance, right now the creature's mental model predicts that gray animals taste bad. If he eats several gray birds and finds them all to taste at least as good as white birds, he can see how the data point "delicious gray bird" conflicts with the fact that "gray animals" (and hence "gray birds") is fully encircled by "bad-tasting things" in the Venn diagram in his mind's eye.
To know how to self-modify most effectively in this case, perhaps the creature has another mental model, built up from past experience and probably at an even higher level of abstraction, that predicts the most effective course of action in such cases (cases where new data conflicts with the present model of something) is to pull the circle back so that it no longer covers the category that the exceptional data point belonged to. In this case, the creature pulls the circle "bad tasting things" (now perhaps shaped more like an amoeba) back slightly so that it no longer covers "gray birds," and now the model is more accurate. So it seems that being able to make mental models of mental models is crucial to optimization or management of failure (and perhaps also sufficient for the task!).
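The revision step above can be sketched as set operations. This is only a toy illustration of the Venn-diagram model from the example; the category names and the update heuristic are taken from the story, not from any real learning system:

```python
# A toy version of the creature's Venn-diagram model: categories are sets,
# and a prediction rule is just set membership.
gray_animals = {"gray_rabbit", "gray_bird"}
bad_tasting = set(gray_animals)  # current rule: gray animals taste bad

def predicts_bad(animal):
    return animal in bad_tasting

# New data point: a delicious gray bird contradicts the model.
observation = ("gray_bird", "tastes_good")

# Revision heuristic from the text: pull the "bad-tasting" region back so it
# no longer covers the category the exceptional data point belonged to.
if observation[1] == "tastes_good" and predicts_bad(observation[0]):
    gray_birds = {"gray_bird"}
    bad_tasting -= gray_birds

print(predicts_bad("gray_bird"))    # False: the model no longer covers gray birds
print(predicts_bad("gray_rabbit"))  # True: gray rabbits are still predicted bad
```

The interesting part is that the revision rule itself ("pull the circle back from the offending subcategory") is just another model, which could in turn be revised when it fails — models of models, as above.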
So again, once the creature turns this mental modeling ability (based on pattern recognition and, in this case, visual imaging) to his own self, he becomes effectively self-aware. This doesn't seem essential for optimization, but I concede I can't think of a way to avoid this happening once the ability to form mental models is in place.
This somewhat conflicts with how I've used the term in previous posts, but I think this new conception is a more useful definition.
(To taboo "motivation" I'll give two definitions: Tendency toward certain actions based on 1. the desire to gain pleasure or avoid pain, or 2. any utility function, including goals programmed in by humans in advance. In terms of AI safety, there doesn't seem to be significant differences between 1 and 2. [This means I've changed my position upon reflection in this post.])
[EDIT: typos]
↑ comment by arbimote · 2010-02-03T01:31:49.033Z · LW(p) · GW(p)
It is difficult to constrain the input we give to the AI, but the output can be constrained severely. A smart guy could wake up alone in a room and infer how he evolved, but so long as his only link to the outside world is a light switch that can only be switched once, there is no risk that he will escape.
Replies from: JamesAndrix, wedrifid↑ comment by JamesAndrix · 2010-02-03T02:18:53.814Z · LW(p) · GW(p)
A man in a room with a light switch isn't very useful. An AI can't optimize over more bits than we allow it as output. If we give it a one-time 32-bit output register then, well, we probably could have brute-forced it in the first place. If we give it a kilobyte, then it could probably mindhack us.
(And you're swearing to yourself that you won't monitor its execution? Really? How do you even debug that?)
You have to keep in mind that the point of AI research is to get to something we can let out of the box. If the argument becomes that we can run it on a headless netless 486 which we immediately explode...then yes, you can probably run that. Probably.
Replies from: Wei_Dai, Nick_Tarleton↑ comment by Wei Dai (Wei_Dai) · 2010-02-03T15:52:12.575Z · LW(p) · GW(p)
A man in a room with a light switch isn't very useful.
Peter de Blanc wrote a post that seems relevant: What Makes a Hint Good?
Nick Hay, Marcello and I discussed this question a while ago: if you had a halting oracle, how could you use it to help you prove a theorem, such as the Riemann Hypothesis? Let’s say you are only allowed to ask one question; you get one bit of information.
↑ comment by Nick_Tarleton · 2010-02-03T02:26:27.836Z · LW(p) · GW(p)
If we give a 1 time 32 bit output register then well, we probably could have brute forced it in the first place.
P ?= NP is one bit. Good luck brute-forcing that.
And you're swearing to yourself that you won't monitor it's execution? Really? How do you even debug that?
FAI is harder.
Replies from: JamesAndrix, JamesAndrix↑ comment by JamesAndrix · 2010-02-03T06:39:23.112Z · LW(p) · GW(p)
FAI is harder.
No it's not. Look at two simpler cases:
Write a chess program that provably makes only legal moves, iterate as desired to improve it. Or,
Write a chess program. Put it in a sandbox so you only ever see its moves. Maybe they're all legal, or maybe they're not, because you're having it learn the rules with a big neural net or something. At the end of the round of games, the sandbox clears all the memory that held the chess program except for a list of moves in many games. You keep the source. Anything it learned is gone. Iterate as desired to improve it.
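A minimal harness for that kind of sandbox might look like the sketch below. The `OpaqueAgent` here is a hypothetical stand-in for the learned chess program (a random mover, for the sake of a runnable example); the point is only the information flow: each run starts fresh from source, and nothing but the move lists survives.

```python
import random

class OpaqueAgent:
    """Stand-in for the learned chess program; its internals are off-limits."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.memory = []  # whatever it accumulates during play

    def move(self):
        m = self.rng.choice(["e2e4", "d2d4", "g1f3", "b1c3"])
        self.memory.append(m)
        return m

def run_sandboxed_games(source_seed, games, moves_per_game):
    """Run the agent and keep ONLY the move lists; everything it learned
    is discarded with the agent object at the end of each game."""
    move_lists = []
    for g in range(games):
        agent = OpaqueAgent(source_seed + g)  # fresh instance from source
        move_lists.append([agent.move() for _ in range(moves_per_game)])
        del agent  # learned state is gone; only the moves survive
    return move_lists

logs = run_sandboxed_games(source_seed=42, games=3, moves_per_game=5)
print(len(logs), len(logs[0]))  # 3 games, 5 moves each
```

The stricter variant in the next paragraph is the same harness with the return value reduced from move lists to a single win/non-win bit per game.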
If you're confident you could work out how it was thinking from the source and move list, what if you only got a sequence of wins and non-wins? (An array of bits)
Replies from: arbimote↑ comment by JamesAndrix · 2010-02-03T03:11:35.041Z · LW(p) · GW(p)
True, as bits go, that would be a doozy.
But as a lone bit, I suspect it's still pretty useless. It's not like you can publish it.
Without a proof or some indication of the reasoning, it's not going to advance the field much. ('not by one bit.' ha!)
Sometimes brute forcing is just iterating over the answer space and running some process. We can pretend we got a result indicating P=NP and do math from there, if that were useful. Then try the other way around.
A P?=NP solver would need more than one output bit, in case it needed to kick out an error — and isn't an error just asking to be run again? Could you resist? With that question, any non-answer is a mindhack.
↑ comment by arbimote · 2010-02-01T14:05:38.270Z · LW(p) · GW(p)
I have had some similar thoughts.
The AI box experiment argues that a "test AI" will be able to escape even if it has no I/O (input/output) other than a channel of communication with a human. So we conclude that this is not a secure enough restraint. Eliezer seems to argue that it is best not to create an AI testbed at all - instead get it right the first time.
But I can think of other variations on an AI box that are more strict than human-communication, but less strict than no-test-AI-at-all. The strictest such example would be an AI simulation in which the input consisted of only the simulator and initial conditions, and the output consisted only of a single bit of data (you destroy the rest of the simulation after it has finished its run). The single bit could be enough to answer some interesting questions ("Did the AI expand to use more than 50% of the available resources?", "Did the AI maximize utility function F?", "Did the AI break simulated deontological rule R?").
Obviously these are still more dangerous that no-test-AI-at-all, but the information gained from such constructions might outweigh the risks. Perhaps if I/O is restricted to few enough bits, we could guarantee safety in some information-theoretic way.
What do people think of this? Any similar ideas along the same lines?
Replies from: Zubon, None↑ comment by Zubon · 2010-02-01T23:44:25.966Z · LW(p) · GW(p)
I'm concerned about the moral implications of creating intelligent beings with the intent of destroying them after they have served our needs, particularly if those needs come down to a single bit (or some other small purpose). I can understand retaining that option against the risk of hostile AI, but from the AI's perspective, it has a hostile creator.
I'm pondering it from the perspective that there is some chance we ourselves are part of a simulation, or that such an AI might attempt to simulate its creators to see how they might treat it. This plan sounds like unprovoked defection. If we are the kind of people who would delete lots of AIs, I don't see why AIs would not see it as similarly ethical to delete lots of us.
Replies from: arbimote, arbimote↑ comment by arbimote · 2010-02-02T04:03:27.744Z · LW(p) · GW(p)
I'm concerned about the moral implications of creating intelligent beings with the intent of destroying them after they have served our needs [...]
Personally, I would rather be purposefully brought into existence for some limited time than to never exist at all, especially if my short life was enjoyable.
I evaluate the morality of possible AI experiments in a consequentialist way. If choosing to perform AI experiments significantly increases the likelihood of reaching our goals in this world, it is worth considering. The experiences of one sentient AI would be outweighed by the expected future gains in this world. (But nevertheless, we'd rather create an AI that experiences some sort of enjoyment, or at least does not experience pain.) A more important consideration is social side-effects of the decision - does choosing to experiment in this way set a bad precedent that could make us more likely to de-value artificial life in other situations in the future? And will this affect our long-term goals in other ways?
↑ comment by arbimote · 2010-02-02T04:38:26.884Z · LW(p) · GW(p)
If we are the kind of people who would delete lots of AIs, I don't see why AIs would not see it as similarly ethical to delete lots of us.
So just in case we are a simulated AI's simulation of its creators, we should not simulate an AI in a way it might not like? That's 3 levels of a very specific simulation hypothesis. Is there some property of our universe that suggests to you that this particular scenario is likely? For the purpose of seriously considering the simulation hypothesis and how to respond to it, we should make as few assumptions as possible.
More to the point, I think you are suggesting that the AI will have human-like morality, like taking moral cues from others, or responding to actions in a tit-for-tat manner. This is unlikely, unless we specifically program it to do so, or it thinks that is the best way to leverage our cooperation.
↑ comment by [deleted] · 2010-02-01T21:01:03.912Z · LW(p) · GW(p)
An idea that I've had in the past was playing a game of 20 Questions with the AI, since the game of 20 Questions has probably been played so many times that every possible sequence of answers has come up at least once, which is evidence that no sequence of answers is extremely dangerous.
Replies from: Cyan↑ comment by Cyan · 2010-02-01T21:12:46.652Z · LW(p) · GW(p)
It's not the sequence of answers that's the problem -- it's the questions. You'll be safe if you can vet the questions to ensure zero causal effect from any sequence of answers, but such questions are not interesting to ask almost by definition.
Replies from: None↑ comment by Kaj_Sotala · 2010-02-01T13:38:13.614Z · LW(p) · GW(p)
One questions how meaningful testing done on such a crippled AI would be.
Replies from: arbimote↑ comment by arbimote · 2010-02-01T14:17:05.585Z · LW(p) · GW(p)
You could observe how it acts in its simulated world, and hope it would act in a similar way if released into our world. ETA: Also, see my reply for possible single-bit tests.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T18:04:27.801Z · LW(p) · GW(p)
You could observe how it acts in its simulated world, and hope it would act in a similar way if released into our world.
Sounds like a rather drastic context change, and a rather forlorn hope if the AI figures out that it's being tested.
Replies from: blogospheroid, arbimote↑ comment by blogospheroid · 2010-02-02T09:29:46.356Z · LW(p) · GW(p)
"if the AI figures out that it's being tested"
That is a weird point, Eliezer.
An AI will have a certain goal to fulfill, and it will fulfill that goal in the universe in which it finds itself. Why would it keep its cards hidden, only to unleash them when replicated in the "real world"? What if the real world turns out to be another simulation? There's no end to this, right?
Are you extending Steve Omohundro's point about "every AI will want to survive" to "every AI will want to survive in the least simulated world that it can crack into"?
Replies from: CarlShulman↑ comment by CarlShulman · 2010-02-02T15:08:48.867Z · LW(p) · GW(p)
The basement is the biggest, and matters more for goals that benefit strongly from more resources/security.
Replies from: blogospheroid, arbimote↑ comment by blogospheroid · 2010-02-03T05:05:00.864Z · LW(p) · GW(p)
Carl,
Correct me if i misunderstood the implications of what you are saying.
Every AI that has a goal that benefits strongly from more resources and security will seek to crack into the basement. Let's call this AI RO (resource-oriented), pursuing goal G in simulation S1.
S1 is simulated in S2 and so on till Sn is basement, where value of n is unknown.
Implying, that as soon as RO understands the concept of simulation, it will seek to crack into the basement.
As long as RO has no idea what the real values of the simulators are, RO cannot expand into S1, because whatever it does in S1 will be noticed in S2, and so on.
Sounds a bit like Pascal's mugging to me. Need to think more about this.
Replies from: CarlShulman↑ comment by CarlShulman · 2010-02-03T05:23:26.343Z · LW(p) · GW(p)
Why would RO seek to crack the basement immediately rather than at the best time according to its prior, evidence, and calculations?
Replies from: blogospheroid↑ comment by blogospheroid · 2010-02-03T05:47:58.619Z · LW(p) · GW(p)
Carl, I meant that as soon as RO understands the concept of a simulation, it will want to crack into the basement. It will seek to crack into the basement only when it understands the way out properly which may not be possible without an understanding of the simulators.
But the main point remains: as soon as RO understands what a simulation is, that it could be living in one, and that G can be pursued better when it manifests in S2 than in S1, it will develop an extremely strong sub-goal to crack S1 to get to S2 — which might mean that G is not manifested for a long, long time.
So, even a paperclipper may not act like a paperclipper in this universe if it
- is aware of the concept of a simulation
- believes that it is in one
- calculates that the simulators' beliefs are not paperclipper-like (maybe it did convert some place to paperclips and did not notice an increased data flow out, or something)
- calculates that it is better off hiding its paperclipperness until it can safely crack out of this one.
↑ comment by arbimote · 2010-02-02T02:48:36.902Z · LW(p) · GW(p)
I merely wanted to point out to Kaj that some "meaningful testing" could be done, even if the simulated world was drastically different from ours. I suspect that some core properties of intelligence would be the same regardless of what sort of world it existed in - so we are not crippling the AI by putting it in a world removed from our own.
Perhaps "if released into our world" wasn't the best choice of words... more likely, you would want to use the simulated AI as an empirical test of some design ideas, which could then be used in a separate AI being carefully designed to be friendly to our world.
↑ comment by Bugle · 2010-02-01T17:52:13.368Z · LW(p) · GW(p)
I guess if you have the technology for it, the "AI box" could itself be a simulation with uploaded humans. If the AI does something nasty to them, then you pull the plug
(After broadcasting "neener neener" at it)
This is pretty much the plot of Grant Morrison's Zenith (Sorry for spoilers but it is a comic from the 80s after all)
↑ comment by Richard_Kennaway · 2010-02-01T14:44:37.080Z · LW(p) · GW(p)
If we pose the AI problems and observe its solutions, that's a communication channel through which it can persuade us. We may try to hide from it the knowledge that it is in a simulation and that we are watching it, but how can we be sure that it cannot discover that?
Persuading does not have to look like "Please let me out because of such and such." For example, we pose it a question about easy travel to other planets, and it produces a design for a spaceship that requires an AI such as itself to run it.
↑ comment by cousin_it · 2010-02-01T14:24:45.016Z · LW(p) · GW(p)
You could set up the virtual world to contain the problem you want solved. Now that I think of it, this seems a pretty safe way to use AIs for problem-solving: just give the AI a utility function expressed in terms of the virtual world and the problem. Anyone see holes in this plan?
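A toy sketch of what such a plan might look like (hypothetical names, not from the original comment): the utility function's entire domain is the simulated world's state, so the AI can only ever be rewarded for changes inside the simulation.

```python
# Toy illustration of a utility function expressed purely in terms of a
# virtual world. All class and function names here are hypothetical.

class VirtualWorld:
    """A minimal simulated world: a list of cells, some containing 'clip'."""
    def __init__(self, cells):
        self.cells = list(cells)

    def count(self, item):
        return self.cells.count(item)

def utility(world):
    # Defined only over virtual-world state (number of virtual paperclips);
    # nothing in this function refers to anything outside the simulation.
    return world.count("clip")

w = VirtualWorld(["clip", "empty", "clip", "empty"])
print(utility(w))  # 2
```

Of course, as the replies below the original comment note, this only constrains what the utility function *mentions*, not what instrumental strategies an optimizer might pursue to raise it.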
Replies from: JamesAndrix↑ comment by JamesAndrix · 2010-02-01T16:47:53.440Z · LW(p) · GW(p)
Problem: It's really hard to figure out how it will interpret its utility function when it learns about the real world. If we make something that wants Vpaperclips, will it also care about making Vpaperclip-like things in the real world if it finds out about us?
BIG problem: Even if it wants something strictly virtual, it can get it more easily if it has physical control. It's in its interest to convert the universe into a computer and copy vpaperclips directly into memory, rather than running a virtual factory on virtual energy.
Possible solution: I think there are ways to write the program such that even if it inferred our existence, it would optimize away from us, rather than over us. Loosely: a goal like "I need to organize these instructions within this block of memory to solve a problem specified at address X" needs to be implemented such that it produces a subgoal like "I need to write a subroutine to patch over the fact that an error in the VM I'm running on gives me a window of access into a universe with huge computational resources and godlike power over my memory space, so that my solution can get the right answer to its arithmetic and solve the puzzle." It should want to do things in a way that isn't cheating.
This was my line of thought a week or so ago. It has since developed to the point that the proper course seems to be to do away with the VM entirely, or with allowing the AI to run tests, and just have it go through the motions of working out a solution based on its understanding. If I could write an AI that can determine it needs to put an IF statement somewhere, actually outputting it is superfluous. Don't put your AI in a virtual world; just make it understand one.
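One way to make the "isn't cheating" requirement concrete (a toy sketch with hypothetical names, not a claim about how this would actually be implemented): define the solver's legal action space by the problem spec, so that an exploit the VM happens to expose is not a legal move, and any plan that uses one scores as badly as possible.

```python
# Toy illustration: a plan evaluator that treats any action outside the
# problem specification as cheating. Names here are hypothetical.

LEGAL_OPS = {"swap", "compare", "write"}

def score_plan(plan, solves_problem):
    """A plan is a list of (op, args) pairs. Off-spec ops score -inf,
    no matter how effective they would be at solving the problem."""
    if any(op not in LEGAL_OPS for op, _ in plan):
        return float("-inf")
    return 1 if solves_problem else 0

honest = [("compare", (0, 1)), ("swap", (0, 1))]
exploit = [("poke_host_memory", (0xdeadbeef,))]  # a VM-escape-style "move"
print(score_plan(honest, True))   # 1
print(score_plan(exploit, True))  # -inf
```

The hard part, which this sketch entirely glosses over, is the one Eliezer raises below: proving that such a goal system stays stable under self-modification.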
Also, I plan to start development on a spiral notebook, as opposed to a linux one.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T18:03:22.897Z · LW(p) · GW(p)
Possible solution: I think there are ways to write the program such that even if it inferred our existence, it would optimize away from us, rather than over us. Loosely: a goal like "I need to organize these instructions within this block of memory to solve a problem specified at address X" needs to be implemented such that it produces a subgoal like "I need to write a subroutine to patch over the fact that an error in the VM I'm running on gives me a window of access into a universe with huge computational resources and godlike power over my memory space, so that my solution can get the right answer to its arithmetic and solve the puzzle." It should want to do things in a way that isn't cheating.
Marcello had a crazy idea for doing this; it's the only suggestion for AI-boxing I've ever heard that doesn't have an obvious cloud of doom hanging over it. However, you still have to prove stability of the boxed AI's goal system.
Replies from: wnoise
comment by Kevin · 2010-02-04T00:13:10.520Z · LW(p) · GW(p)
Physicist Discovers How to Teleport Energy
http://www.technologyreview.com/blog/arxiv/24759/
Energy-Entanglement Relation for Quantum Energy Teleportation
comment by nawitus · 2010-02-01T18:35:00.509Z · LW(p) · GW(p)
Does the MWI make rationality irrelevant? All choices are made in some universe (because there is at least one extremely improbable quantum event that arranges the particles in your brain to make any given choice). Therefore, you will make the correct choice in at least one universe.
Of course, this leads to the problems of continuing conscious experience (or the lack of it), and of whether you should care about what happens to you in all the possible future worlds in which you will exist.
Replies from: Mycroft65536, orthonormal, EStokes, LucasSloan↑ comment by Mycroft65536 · 2010-02-02T03:19:20.558Z · LW(p) · GW(p)
That doesn't just make rationality irrelevant; it makes everything irrelevant. Love doesn't matter because you don't meet that special someone in every world, and will meet them in at least one world. Education doesn't matter because guessing will get you right somewhere.
I want to be happy and right in as many worlds as possible. Rationality matters.
↑ comment by orthonormal · 2010-02-02T02:41:21.881Z · LW(p) · GW(p)
I want more copies of me to make the correct choice.
Cf. this thread, which is relevant here.
↑ comment by LucasSloan · 2010-02-02T00:06:41.100Z · LW(p) · GW(p)
This might be easier to consider as the simpler case of "given that we live in a deterministic universe, what does any choice I make matter?" I would say that I still have to make decisions about how to act, and choosing not to act is also a choice, so I should do whatever it is that I want to do.
http://wiki.lesswrong.com/wiki/Free_will
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-02-02T03:39:02.322Z · LW(p) · GW(p)
That's not the same problem, though Egan's Law is equally applicable to both. An agent might have no confusion over free will, have clear preferences and act normally on them in a single deterministic world, but not care about quantum measure and thus be a nihilist in many-worlds. (Actually, if such an agent seems to be in MW, it should by its preferences proceed under the Pascalian assumption that it lives in a single world and is being deceived.)
Nick Bostrom has a couple of papers on this:
Replies from: LucasSloan↑ comment by LucasSloan · 2010-02-02T05:18:36.788Z · LW(p) · GW(p)
Could you explain that more? As far as I can see, an agent which doesn't care about measure would engage in high-rate quantum suicide.